Compress large language models using knowledge distillation from teacher to student models. Use when deploying smaller models that retain most of the teacher's performance, transferring GPT-4 capabilities to open-source models, or reducing inference costs. Covers temperature scaling, soft targets, reverse KLD, logit distillation, and MiniLLM training strategies.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install knowledge-distillation@zechenzhangAGI/AI-research-SKILLs
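For orientation, the sketch below shows the core idea behind the listed techniques: a temperature-scaled soft-target loss, with a flag to switch between forward KLD (standard logit distillation) and reverse KLD (the MiniLLM-style objective). It is a minimal illustration, not the plugin's actual code; the function name and signature are assumptions.

```python
# Minimal sketch of a temperature-scaled distillation loss.
# Illustrative only; not the plugin's implementation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0, reverse=False):
    """KL divergence between temperature-softened teacher and student distributions.

    reverse=False: forward KLD, KL(teacher || student) -- standard soft-target KD.
    reverse=True:  reverse KLD, KL(student || teacher) -- the mode-seeking
                   objective associated with MiniLLM-style training.
    """
    # Temperature > 1 softens both distributions, exposing the teacher's
    # "dark knowledge" about relative probabilities of non-target classes.
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    t_log_probs = F.log_softmax(teacher_logits.detach() / temperature, dim=-1)

    if reverse:
        # F.kl_div(input, target) computes KL(target || input),
        # so passing (teacher, student) gives KL(student || teacher).
        kl = F.kl_div(t_log_probs, s_log_probs, log_target=True, reduction="batchmean")
    else:
        # (student, teacher) gives KL(teacher || student): the classic Hinton loss.
        kl = F.kl_div(s_log_probs, t_log_probs, log_target=True, reduction="batchmean")

    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return kl * temperature ** 2
```

In practice this term is usually mixed with the ordinary cross-entropy loss on ground-truth labels, and MiniLLM adds further policy-gradient-style machinery on top of reverse KLD; see the skill's documentation for the full training strategies.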