<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on AI Coding Blog</title><link>https://mpklu.github.io/posts/</link><description>Recent content in Posts on AI Coding Blog</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 06 Apr 2026 12:00:00 -0400</lastBuildDate><atom:link href="https://mpklu.github.io/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>Gemma 4 Structured-Task Performance: Field Report from a Local-First App</title><link>https://mpklu.github.io/posts/gemma-4-benchmark/</link><pubDate>Mon, 06 Apr 2026 12:00:00 -0400</pubDate><guid>https://mpklu.github.io/posts/gemma-4-benchmark/</guid><description>&lt;h1 id="gemma-4-structured-task-performance-field-report-from-a-local-first-app"&gt;Gemma 4 Structured-Task Performance: Field Report from a Local-First App&lt;/h1&gt;
&lt;p&gt;&lt;em&gt;Benchmark data and prompt-format findings from deploying Gemma 4 E4B in a real application. Intended for LLM teams (Gemma, Ollama) and developers building structured-output pipelines on local models.&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="context"&gt;Context&lt;/h2&gt;
&lt;p&gt;We build &lt;a href="https://github.com/mpklu/gary"&gt;Gary&lt;/a&gt;, a privacy-first personal assistant CLI for macOS. It runs entirely locally — an encrypted database, a daemon process, and a local LLM via Ollama. The LLM handles three structured tasks:&lt;/p&gt;</description></item><item><title>The Vibe Coding Trap</title><link>https://mpklu.github.io/posts/vibe-coding-trap/</link><pubDate>Mon, 23 Mar 2026 00:30:00 -0400</pubDate><guid>https://mpklu.github.io/posts/vibe-coding-trap/</guid><description>&lt;p&gt;&lt;em&gt;Let&amp;rsquo;s be real. When the first AI coding agents dropped, we all nodded solemnly and said, &amp;ldquo;Of course, a human will always review every single change. Safety first.&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;We lied to ourselves. Or, more accurately, we succumbed to the seductive illusion of frictionless productivity—a &lt;a href="https://mpklu.github.io/posts/10x-illusion/"&gt;10x illusion&lt;/a&gt; where we feel like we&amp;rsquo;re coding faster, but we&amp;rsquo;re actually just accumulating debt we can&amp;rsquo;t afford to pay.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The recent &lt;a href="https://stackoverflow.blog/2026/03/19/ai-is-becoming-a-second-brain-at-the-expense-of-your-first-one/"&gt;Stack Overflow post on AI as a second brain&lt;/a&gt; identifies the core issue: we are offloading our judgment. This isn&amp;rsquo;t a future sci-fi risk; &lt;a href="https://stackoverflow.blog/2026/03/19/ai-is-becoming-a-second-brain-at-the-expense-of-your-first-one/"&gt;cognitive offloading&lt;/a&gt; is happening now, and it&amp;rsquo;s reshaping both our codebases and our minds.&lt;/em&gt;&lt;/p&gt;</description></item><item><title>10x Illusion</title><link>https://mpklu.github.io/posts/10x-illusion/</link><pubDate>Fri, 20 Mar 2026 18:43:42 -0400</pubDate><guid>https://mpklu.github.io/posts/10x-illusion/</guid><description>&lt;h1 id="the-10x-illusion-if-ai-codes-10x-faster-how-much-faster-do-projects-actually-ship"&gt;The 10x Illusion: If AI Codes 10x Faster, How Much Faster Do Projects Actually Ship?&lt;/h1&gt;
&lt;p&gt;&lt;em&gt;AI coding tools are getting shockingly good. So it&amp;rsquo;s natural to ask: if the coding part gets 10x faster, shouldn&amp;rsquo;t the whole project get 10x faster too?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The answer is counterintuitive, and backed by a growing body of data.&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="the-speed-is-real-the-extrapolation-is-not"&gt;The Speed Is Real. The Extrapolation Is Not&lt;/h2&gt;
&lt;p&gt;AI coding tools deliver genuine speed on implementation tasks. GitHub Copilot studies show developers completing isolated coding tasks &lt;strong&gt;55% faster&lt;/strong&gt;. AI agents can generate entire modules in minutes. The speed is not the illusion.&lt;/p&gt;</description></item></channel></rss>