The model used in this app is based on 'mlx-community/Mistral-7B-Instruct-v0.3-4bit' from the Hugging Face repository. It was further trained using mlx_lm.lora with content from my eBooks "Pharmacokinetics" and "Boomer Manual", both available on the Apple Books store. The original model and the adapters created during training were combined using mlx_lm.fuse and uploaded to Hugging Face as "pharmpk/pk-mistral-7b-v0.3-4bit".
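A sketch of the training and fusing workflow using the mlx_lm command-line tools is shown below. The data path, iteration count, and adapter path are illustrative placeholders, not the actual values used for this model:

```shell
# Fine-tune the quantized base model with LoRA on the eBook-derived
# training data (paths and hyperparameters here are illustrative).
mlx_lm.lora \
  --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
  --train \
  --data ./pk-training-data \
  --iters 1000 \
  --adapter-path ./adapters

# Fuse the trained adapters back into the base model and upload the
# combined weights to the Hugging Face hub.
mlx_lm.fuse \
  --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
  --adapter-path ./adapters \
  --save-path ./pk-mistral-7b-v0.3-4bit \
  --upload-repo pharmpk/pk-mistral-7b-v0.3-4bit
```

Fusing folds the low-rank adapter weights into the base model so the app only needs to download a single set of weights.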
The objective of this model is to provide a basic chatbot with additional insight into pharmacokinetic content. The audience for this application includes students and researchers in the area of pharmacokinetics, with the warning that the output is LLM-generated and may not always be accurate. Future development might include additional pharmacokinetic content. The developer plans to explore the multitude of publicly available models and MLX parameters to improve the results provided by the app.
 
When this app is first opened, the fused/combined model is downloaded to the user's device (an Apple Intelligence enabled iOS or macOS device). This can take a while, perhaps 5-10 minutes depending on the user's network speed and device; once the download completes, the 'Get Response' button becomes enabled. The model is cached on-device, so subsequent start-ups are much quicker.
The user enters their question (prompt) in the first text field. The prompt history can optionally be included when submitting follow-on questions. The user can also ask for more technical answers, which stay strictly related to the prompt, or more creative answers, which may stray from the prompt provided.
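A technical-versus-creative control like this typically maps onto the sampling temperature; whether this app uses temperature specifically is an assumption, but the effect can be previewed with the mlx_lm command-line generator (prompt text is illustrative):

```shell
# Lower temperature -> more deterministic, "technical" answers;
# higher temperature -> more varied, "creative" answers.
mlx_lm.generate \
  --model pharmpk/pk-mistral-7b-v0.3-4bit \
  --prompt "Explain first-order elimination kinetics." \
  --max-tokens 256 \
  --temp 0.2   # try e.g. 0.9 for more creative output
```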
Technologies Used:
SwiftUI
MLX
Model Content Resources:
Pharmacokinetics by David Bourne
Boomer Manual by David Bourne
ML/LLM Resources:
Exploring MLX Swift by Rudrank Riyam
Explore large language models on Apple silicon with MLX (WWDC 2025)
The machine learning model was created using Apple Intelligence and MLX, including the mlx_lm package and its mlx_lm.lora and mlx_lm.fuse tools.