GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware or Apple Silicon, or when flexible 2- to 8-bit quantization is needed without requiring a GPU.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install gguf-quantization@zechenzhangAGI/AI-research-SKILLs
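As a rough sketch of the workflow this skill covers, a model is typically converted to GGUF and then quantized with llama.cpp's tools. The model path and output names below are placeholders, not part of this skill's configuration:

```shell
# Convert a Hugging Face checkpoint to a 16-bit GGUF file
# (convert_hf_to_gguf.py ships with the llama.cpp repository):
python convert_hf_to_gguf.py ./my-model --outfile model-f16.gguf

# Quantize to 4-bit; Q4_K_M is a common size/quality tradeoff,
# with other types spanning roughly 2-8 bits (Q2_K ... Q8_0):
./llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M

# Run inference on CPU or Apple Silicon with the quantized file:
./llama-cli -m model-q4_k_m.gguf -p "Hello"
```

The intermediate f16 GGUF preserves full quality so that different quantization types can be produced from it without reconverting the original checkpoint.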