- remove async LLM generation -- it just doubles our work
- and does not match the style used in the example applications
- package generation parameters into a struct (see the sketch after this list)
- refactor command-line arguments into distinct pieces based on their use
- these pieces will be reusable in the LoRA commands
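A minimal sketch, assuming swift-argument-parser, of what the two points above could look like; `GenerateParameters`, `GenerateArguments`, and `EvalCommand` are illustrative names, not the repository's actual types:

```swift
import ArgumentParser

/// Hypothetical container for sampling/generation settings.
struct GenerateParameters {
    var temperature: Float = 0.6
    var topP: Float = 0.9
    var maxTokens: Int = 256
}

/// Hypothetical reusable command-line piece; other commands (e.g. the LoRA
/// commands) can pull it in with @OptionGroup instead of redeclaring flags.
struct GenerateArguments: ParsableArguments {
    @Option(help: "Sampling temperature")
    var temperature: Float = 0.6

    @Option(name: .customLong("top-p"), help: "Top-p (nucleus) sampling")
    var topP: Float = 0.9

    @Option(name: .customLong("max-tokens"), help: "Maximum tokens to generate")
    var maxTokens: Int = 256

    /// Bundle the parsed flags into the parameter struct.
    var parameters: GenerateParameters {
        GenerateParameters(temperature: temperature, topP: topP, maxTokens: maxTokens)
    }
}

struct EvalCommand: ParsableCommand {
    // Reused verbatim by any command that needs generation flags.
    @OptionGroup var generate: GenerateArguments

    func run() throws {
        let params = generate.parameters
        print("temperature=\(params.temperature), maxTokens=\(params.maxTokens)")
    }
}
```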
* Add Package.swift for LLM and MNIST (a sketch follows below)
* Make ModelType properties public
* Make ModelType method createModel public
* Add installation instructions to the README
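A minimal sketch, assuming a standard Swift Package Manager layout, of a manifest exposing LLM and MNIST as library products; the target paths, platforms, and dependency versions below are placeholders, not the repository's actual manifest:

```swift
// swift-tools-version: 5.9
// Hypothetical Package.swift -- names, paths, and versions are illustrative.
import PackageDescription

let package = Package(
    name: "mlx-swift-examples",
    platforms: [.macOS(.v13), .iOS(.v16)],
    products: [
        // Library products so the example code can be consumed by the apps and tools.
        .library(name: "LLM", targets: ["LLM"]),
        .library(name: "MNIST", targets: ["MNIST"]),
    ],
    dependencies: [
        .package(url: "https://github.com/ml-explore/mlx-swift", from: "0.10.0"),
        // Tracking main, per the note below about swift-transformers fixes.
        .package(url: "https://github.com/huggingface/swift-transformers", branch: "main"),
    ],
    targets: [
        .target(
            name: "LLM",
            dependencies: [
                .product(name: "MLX", package: "mlx-swift"),
                .product(name: "MLXNN", package: "mlx-swift"),
                .product(name: "Transformers", package: "swift-transformers"),
            ],
            path: "Libraries/LLM"
        ),
        .target(
            name: "MNIST",
            dependencies: [
                .product(name: "MLX", package: "mlx-swift"),
                .product(name: "MLXNN", package: "mlx-swift"),
            ],
            path: "Libraries/MNIST"
        ),
    ]
)
```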
* Feat: LLMEval UI Improvements
1. Adds Markdown rendering in the UI
2. Adds init time and tokens/second stats
3. Minor UI enhancements
* feat: adds a copy-to-clipboard button for LLM outputs
* adds ScrollViewReader to sync with main
* ran pre-format to resolve formatting issues
* updates the project definition with the missing dependency
* feat: switch between plain text and Markdown
adds a segmented picker to switch between plain text and Markdown rendering (see the sketch below)
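A minimal SwiftUI sketch of the segmented plain-text/Markdown toggle and the copy-to-clipboard button mentioned above; `OutputView` and its properties are illustrative names, not the actual LLMEval view code, and the real app may use a dedicated Markdown view instead of `AttributedString`:

```swift
import SwiftUI
#if canImport(UIKit)
import UIKit
#else
import AppKit
#endif

struct OutputView: View {
    let output: String                       // generated text from the model
    @State private var showMarkdown = true   // toggled by the segmented picker

    var body: some View {
        VStack(alignment: .leading) {
            // Segmented picker: Markdown vs. plain text
            Picker("Display", selection: $showMarkdown) {
                Text("Markdown").tag(true)
                Text("Plain").tag(false)
            }
            .pickerStyle(.segmented)

            ScrollView {
                if showMarkdown, let attributed = try? AttributedString(markdown: output) {
                    Text(attributed)
                } else {
                    Text(verbatim: output)
                }
            }

            // Copy-to-clipboard button
            Button("Copy") {
                #if canImport(UIKit)
                UIPasteboard.general.string = output
                #else
                NSPasteboard.general.clearContents()
                NSPasteboard.general.setString(output, forType: .string)
                #endif
            }
        }
        .padding()
    }
}
```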
* switch swift-tokenizers to main, remove some workarounds
- swift-tokenizers is getting a lot of updates and fixes, let's track main for now
- remove some workarounds that are no longer needed
- https://github.com/huggingface/swift-transformers/issues/63
- document the tokenizer used (https://github.com/huggingface/swift-transformers)
- provide a hook for tokenizer configuration and prompt augmentation (see the sketch after this list)
- this isn't as rich as the Python equivalents but it helps a little
- handle loading models with different names for the safetensors files (Gemma)
- handle merge tokens that can't be split
- organize code into Load/Evaluate
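A minimal sketch, with hypothetical names, of what a per-model hook for tokenizer configuration and prompt augmentation could look like; `ModelConfiguration`, its properties, and the Gemma example below are illustrative, not the repository's actual API:

```swift
import Foundation

/// Hypothetical per-model settings: an identifier plus hooks the evaluator
/// applies before tokenizing the user's prompt.
struct ModelConfiguration {
    let id: String

    /// Illustrative overrides merged into the downloaded tokenizer
    /// configuration, e.g. to patch missing or incorrect fields.
    var tokenizerOverrides: [String: String] = [:]

    /// Hook that wraps the raw prompt in the model's expected chat template.
    var preparePrompt: (String) -> String = { $0 }
}

// Example: a Gemma-style configuration that wraps prompts in chat turns.
// The turn markers follow the published Gemma chat format; treat them as an example.
let gemma = ModelConfiguration(
    id: "google/gemma-2b-it",
    preparePrompt: { prompt in
        "<start_of_turn>user\n\(prompt)<end_of_turn>\n<start_of_turn>model\n"
    }
)

// The evaluation path would call the hook before tokenizing, roughly:
//   let augmented = configuration.preparePrompt(userPrompt)
//   let tokens = tokenizer.encode(text: augmented)
```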