# Llama

This is a port of the Llama model from:

- https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/models/llama.py

You can use this to load models from Hugging Face, e.g.:

- https://huggingface.co/mlx-community/Mistral-7B-v0.1-hf-4bit-mlx

See [llm-tool](../../Tools/llm-tool).
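
As a rough sketch, loading one of these models from Swift might look like the following; the `ModelConfiguration` and `LLMModelFactory` names are assumed from the MLXLMCommon / MLXLLM libraries in this repository and are not defined by this README.

```swift
// A minimal sketch, assuming the MLXLLM / MLXLMCommon packages from this
// repository; the factory and configuration names are assumptions.
import MLXLLM
import MLXLMCommon

// Reference a 4-bit quantized model on the Hugging Face hub; it is downloaded
// on first use and cached locally.
let configuration = ModelConfiguration(id: "mlx-community/Mistral-7B-v0.1-hf-4bit-mlx")

// Load the weights and tokenizer into a model container ready for generation.
let container = try await LLMModelFactory.shared.loadContainer(configuration: configuration)
_ = container
```

For a complete generation loop built on top of the loaded model, see the llm-tool example linked above.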