📄️ Overview
LLMForge APIs are in Beta.
📄️ Bring your own data
How to bring your own data for fine-tuning.
📄️ Modifying hyperparameters
A brief guide to customizing your fine-tuning job.
📄️ Bring any Hugging Face model
Fine-tune any 🤗 Hugging Face transformer model with any prompt format.
📄️ Continue fine-tuning from a previous checkpoint
How to use a previous checkpoint for another round of fine-tuning.
📄️ Run fine-tuning as an Anyscale Job
How to submit a fine-tuning experiment as a job (useful for CI/CD).
📄️ LoRA vs. full-parameter training
A quick reminder of the differences between LoRA and full-parameter training.
📄️ Optimizing cost and performance for fine-tuning
How to set up YAML configs for optimal cost or throughput.
📄️ Preference Tuning with DPO
Direct Preference Optimization on Anyscale.
📄️ (Preview) Seamless fine-tuning and serving with the Models SDK/CLI
How to use Anyscale's LLM Models SDK/CLI to fine-tune and serve custom models seamlessly.