#vibe

6 articles

All articles tagged "vibe" — self-hosted AI fixes, setups, and architecture notes.

Sovereign MCP Server: Local Setup, Integration, and Hard Lessons

Learn how to run a self-hosted MCP server for your blog’s knowledge base, integrate it with OpenClaw and Vibe, and avoid the pitfalls I hit while migrating from cloud to Sovereign AI.
Hands-on AI Coding Tools: Why I Kept Claude Code + Vibe and Dumped Cursor and Continue.dev

A hands-on comparison of AI coding tools, testing local inference vs. cloud dependency for privacy-first workflows.

Tags: strategy, mistral

Six Weeks Running Mistral Small 4 as a Production Tool: What I Actually Learned

A deep dive into optimizing Mistral Small 4 for local technical blogging, with practical solutions for session memory, image generation, and EEAT compliance.

Tags: strategy, mistral

Vibe 400 Bad Request Fix: Mistral Alternating Roles and reasoning_effort

Three separate 400 Bad Request causes in Mistral Vibe with SGLang, their root causes, and update-safe fixes.

Tags: fix, devops, mistral, sglang

Vibe write_file Overwrite Bug: When Edits Silently Replace Whole Files

How strict workflow rules and tool constraints prevent AI agents from destroying your codebase during file edits.

Tags: fix, devops, gitea, mcp, mistral

SGLang on DGX Spark: 35-41 tok/s with EAGLE Speculative Decoding

How we got Mistral Small 4 119B inference working on NVIDIA DGX Spark's ARM64 GB10 chip with SGLang, including backend selection, speculative decoding, and Vibe CLI optimizations.

Tags: fix, devops, mistral, sglang