* implement LoRA / QLoRA
- example of using MLX to fine-tune an LLM with low-rank adaptation (LoRA) for a target task; see the sketch after this entry
- see also https://arxiv.org/abs/2106.09685
- based on https://github.com/ml-explore/mlx-examples/tree/main/lora
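A minimal sketch of the idea, assuming mlx-swift's `Module`, `Linear`, `MLXRandom`, and `matmul` APIs roughly as shown; parameter registration and freezing details are elided, and this is not the repo's actual LoRA implementation:

```swift
import MLX
import MLXNN
import MLXRandom

// Sketch of a LoRA adapter layer: the frozen pretrained projection W0 is
// augmented with a trainable low-rank update, y = W0 x + (alpha / r) * (x A) B,
// so only A and B are learned during fine-tuning.
class LoRALinear: Module, UnaryLayer {
    let base: Linear      // frozen pretrained linear layer
    let loraA: MLXArray   // [dIn, r], small random init
    let loraB: MLXArray   // [r, dOut], zeros so training starts from the base model
    let scale: Float

    init(base: Linear, rank: Int = 8, alpha: Float = 16) {
        self.base = base
        let dOut = base.weight.shape[0]
        let dIn = base.weight.shape[1]
        self.loraA = MLXRandom.normal([dIn, rank]) * 0.01
        self.loraB = MLX.zeros([rank, dOut])
        self.scale = alpha / Float(rank)
        super.init()
    }

    func callAsFunction(_ x: MLXArray) -> MLXArray {
        base(x) + matmul(matmul(x, loraA), loraB) * scale
    }
}
```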
* add some command-line flags I found useful
- --quiet -- print only the generated text, not the decorator text
- --prompt @/tmp/file.txt -- load the prompt from a file (see the sketch below)
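A rough sketch of how these two flags could be wired up with swift-argument-parser; the type and property names are assumptions, not the tool's actual source:

```swift
import ArgumentParser
import Foundation

// Hypothetical sketch of the two flags; names and structure are assumptions.
struct GenerateOptions: ParsableArguments {
    @Flag(help: "Print only the generated text, without the decorator text.")
    var quiet = false

    @Option(help: "The prompt, or @/path/to/file to load the prompt from a file.")
    var prompt: String

    // Resolve "@file" prompts into the file's contents.
    func resolvedPrompt() throws -> String {
        guard prompt.hasPrefix("@") else { return prompt }
        let path = String(prompt.dropFirst())
        return try String(contentsOfFile: path, encoding: .utf8)
    }
}
```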
* the user can specify either a local path to the model or a Hugging Face model identifier (see the sketch below)
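A sketch of the resolution logic, assuming the download of Hugging Face identifiers happens elsewhere (the `download` closure is a stand-in, not a real API):

```swift
import Foundation

// Hypothetical sketch: a local directory path is used directly, anything else
// is treated as a Hugging Face model identifier and fetched by the caller.
func resolveModelDirectory(
    _ model: String,
    download: (String) async throws -> URL
) async throws -> URL {
    var isDirectory: ObjCBool = false
    if FileManager.default.fileExists(atPath: model, isDirectory: &isDirectory),
       isDirectory.boolValue {
        return URL(fileURLWithPath: model)
    }
    // e.g. "some-org/some-model" -> fetch from the Hugging Face hub
    return try await download(model)
}
```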
* update mlx-swift reference
Co-authored-by: Ashraful Islam <ashraful.meche@gmail.com>
Co-authored-by: JustinMeans <46542161+JustinMeans@users.noreply.github.com>
- document the tokenizer used (https://github.com/huggingface/swift-transformers)
- provide a hook for tokenizer configuration and prompt augmentation (sketched below)
- this isn't as rich as the Python equivalents, but it helps a little
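One hypothetical shape such a hook could take, assuming swift-transformers' `AutoTokenizer.from(pretrained:)` and `encode(text:)`; the type and property names here are illustrative, not the repo's actual API:

```swift
import Tokenizers

// Hook sketch: optionally override which tokenizer is loaded, and rewrite the
// prompt (e.g. wrap it in an instruct template) before encoding.
struct PromptPreparation {
    var tokenizerId: String? = nil              // override the tokenizer repo if needed
    var augment: (String) -> String = { $0 }    // e.g. { "[INST] \($0) [/INST]" }
}

func encodePrompt(
    _ prompt: String,
    model: String,
    preparation: PromptPreparation = PromptPreparation()
) async throws -> [Int] {
    let tokenizer = try await AutoTokenizer.from(pretrained: preparation.tokenizerId ?? model)
    return tokenizer.encode(text: preparation.augment(prompt))
}
```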
- handle loading models whose safetensors files use different names (e.g. Gemma); see the sketch below
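A small sketch of one way to do this (an assumption about the approach, not the actual code): enumerate every `*.safetensors` file in the model directory instead of expecting a single fixed file name.

```swift
import Foundation

// Hypothetical sketch: collect all *.safetensors files in the model directory,
// since some models (e.g. Gemma) ship differently named weight files.
func safetensorFiles(in directory: URL) throws -> [URL] {
    try FileManager.default
        .contentsOfDirectory(at: directory, includingPropertiesForKeys: nil)
        .filter { $0.pathExtension == "safetensors" }
        .sorted { $0.lastPathComponent < $1.lastPathComponent }
}
```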
- handle merge tokens that can't be split
- organize code into Load/Evaluate