RocheLog

Testing AI to its breaking point.


Llama-3-8B on MacBook M1: The Verdict

2025-12-28

I ran the quantized build through Ollama. It consumes about 6 GB of RAM and generates a decent 18 tokens/s. Good for RAG, bad for creative writing. [Read Log]
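
Not from the log itself, just a minimal sketch of how you can drive a local Ollama model from TypeScript. It assumes Ollama's default REST API on port 11434 and the llama3:8b tag; adjust the tag to whatever quantization you actually pulled.

    // Sketch: query a local Ollama server running Llama-3-8B.
    // Assumes `ollama pull llama3:8b` has been run and port 11434 is the default.
    async function ask(prompt: string): Promise<string> {
      const res = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          model: 'llama3:8b', // assumed tag; swap in the quant you pulled
          prompt,
          stream: false,      // return one JSON object instead of a token stream
        }),
      });
      const data = await res.json();
      return data.response;   // the generated text
    }

    ask('Answer using only the retrieved context: ...').then(console.log);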

Avoid the LangChain v0.2.x Update

2025-12-27

Over-engineered abstraction. Stick with the Vercel AI SDK if you want to keep your sanity. Code examples included. [Read Log]
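
Not the examples from the full log, just a minimal sketch of the AI SDK call shape. It assumes the ai and @ai-sdk/openai packages plus an OPENAI_API_KEY in the environment; the model name and prompt are placeholders.

    // Sketch: one text completion via the Vercel AI SDK (generateText helper).
    import { generateText } from 'ai';
    import { openai } from '@ai-sdk/openai';

    async function main() {
      const { text } = await generateText({
        model: openai('gpt-4o-mini'), // placeholder model name
        prompt: 'Explain retrieval-augmented generation in one paragraph.',
      });
      console.log(text);
    }

    main();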