Auto-configure Ollama when the user needs local LLM deployment, a free AI alternative, or wants to eliminate OpenAI/Anthropic API costs. Trigger phrases: "install ollama", "local AI", "free LLM", "self-hosted AI", "replace OpenAI", "no API costs"
Automatically detect when users need local AI deployment and guide them through Ollama installation, eliminating paid API dependencies.
When the user mentions any of the trigger phrases above → activate this skill
# Check OS
uname -s
# Check available memory
free -h # Linux
vm_stat # macOS
# Check GPU
nvidia-smi # NVIDIA
system_profiler SPDisplaysDataType # macOS
8GB RAM: up to 7B models (e.g. llama3.2, mistral:7b)
16GB RAM: up to 13B models
32GB+ RAM: 33B models and larger (quantized 70B is possible)
These tiers can be checked programmatically, as sketched below.
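A minimal sketch of that check, assuming the third-party psutil package is installed; the thresholds mirror the sizing list above and are not part of any official Ollama API:

import psutil

# Total physical memory in GiB
ram_gib = psutil.virtual_memory().total / 2**30

if ram_gib >= 32:
    suggestion = '33B models and larger'
elif ram_gib >= 16:
    suggestion = 'up to 13B models'
elif ram_gib >= 8:
    suggestion = 'up to 7B models (e.g. llama3.2)'
else:
    suggestion = 'small quantized models only'

print(f'{ram_gib:.0f} GiB RAM detected: {suggestion}')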
macOS:
brew install ollama
brew services start ollama
ollama pull llama3.2
Linux:
curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl start ollama
ollama pull llama3.2
Docker:
docker run -d \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
docker exec -it ollama ollama pull llama3.2
Verify installation:
ollama list
ollama run llama3.2 "Say hello"
curl http://localhost:11434/api/tags
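The same verification works from Python with only the standard library; this sketch assumes Ollama is serving on its default port 11434:

import json
import urllib.request

# /api/tags lists the locally installed models
with urllib.request.urlopen('http://localhost:11434/api/tags') as resp:
    tags = json.load(resp)

for model in tags.get('models', []):
    print(model['name'])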
Python:
import ollama
response = ollama.chat(
    model='llama3.2',
    messages=[{'role': 'user', 'content': 'Hello!'}]
)
print(response['message']['content'])
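The same client also supports streaming, which is useful for interactive tools; a short sketch under the same assumptions as the example above:

import ollama

# Stream tokens as they are generated instead of waiting for the full reply
for chunk in ollama.chat(
    model='llama3.2',
    messages=[{'role': 'user', 'content': 'Hello!'}],
    stream=True,
):
    print(chunk['message']['content'], end='', flush=True)
print()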
Node.js:
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'llama3.2',
  messages: [{ role: 'user', content: 'Hello!' }]
})
console.log(response.message.content)
cURL:
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2",
"prompt": "Hello!"
}'
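The raw endpoint works from any HTTP client, not just curl. A stdlib-only Python sketch; setting "stream": false makes the server return a single JSON object whose response field holds the completion:

import json
import urllib.request

payload = json.dumps({
    'model': 'llama3.2',
    'prompt': 'Hello!',
    'stream': False,  # one JSON reply instead of a token stream
}).encode()

req = urllib.request.Request(
    'http://localhost:11434/api/generate',
    data=payload,
    headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)['response'])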
Migrating from OpenAI:
# Before (Paid)
from openai import OpenAI
client = OpenAI(api_key="...")
# After (Free)
import ollama
response = ollama.chat(model='llama3.2', ...)
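Often no rewrite is needed at all: Ollama exposes an OpenAI-compatible endpoint at /v1, so existing openai-client code can simply be pointed at the local server. A sketch; the api_key value is a placeholder, since the local server ignores it:

from openai import OpenAI

# Reuse the existing OpenAI client against the local Ollama server
client = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')

response = client.chat.completions.create(
    model='llama3.2',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(response.choices[0].message.content)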
Migrating from Anthropic:
# Before (Paid)
from anthropic import Anthropic
client = Anthropic(api_key="...")
# After (Free)
import ollama
response = ollama.chat(model='mistral', ...)
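If provider calls are wrapped behind a helper, only the helper needs to change. A minimal sketch; the function name complete and the model choice are illustrative, not part of any library:

import ollama

def complete(prompt: str, system: str = '') -> str:
    """Hypothetical provider-agnostic helper, rewired to the local model."""
    messages = []
    if system:
        messages.append({'role': 'system', 'content': system})
    messages.append({'role': 'user', 'content': prompt})
    response = ollama.chat(model='mistral', messages=messages)
    return response['message']['content']

print(complete('Hello!', system='Be concise.'))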
ollama pull codellama
ollama run codellama "Write a Python REST API"
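The same model is reachable from the Python client via the single-prompt generate call; a short sketch:

import ollama

# One-shot code generation with the codellama model
response = ollama.generate(model='codellama', prompt='Write a Python REST API')
print(response['response'])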
Cost comparison:
OpenAI GPT-4: metered per-token API pricing; data leaves your machine
Ollama: free local inference; the only cost is your own hardware
Performance:
With GPU (NVIDIA/Metal): hardware-accelerated inference, substantially faster token generation
CPU Only: works, but slower; prefer small or quantized models
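Actual throughput is easy to measure: each Ollama response carries timing metadata, with eval_count (tokens generated) and eval_duration (nanoseconds spent generating). A sketch, assuming the ollama Python package:

import ollama

response = ollama.generate(model='llama3.2', prompt='Explain RAM in one sentence.')

# Tokens per second from the server's own timing fields
tokens = response['eval_count']
seconds = response['eval_duration'] / 1e9
print(f'{tokens / seconds:.1f} tokens/sec')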
# Use quantized models (Ollama's default tags are already 4-bit quantized)
ollama pull llama3.2:3b # smaller 3B variant
# Use a smaller model
ollama pull mistral:7b # far faster than a 70B model
# Pull model first
ollama pull llama3.2
ollama list # Verify
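The fix can also live in application code; a minimal sketch, assuming the ollama Python package (pull is safe to call when the model is already present):

import ollama

# Make sure the model exists locally before the first chat call
ollama.pull('llama3.2')

response = ollama.chat(
    model='llama3.2',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(response['message']['content'])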
After skill execution, the user should have:
- Ollama installed and running as a local service
- At least one model pulled (e.g. llama3.2) and visible in ollama list
- A working API at http://localhost:11434
- Application code pointed at the local model instead of a paid API
Related skills:
local-llm-wrapper - Generic local LLM integration
ai-sdk-agents - AI SDK with Ollama support
privacy-first-ai - Privacy-focused AI workflows