# Gemma 4 Structured-Task Performance: Field Report from a Local-First App
*Benchmark data and prompt-format findings from deploying Gemma 4 E4B in a real application. Intended for LLM teams (Gemma, Ollama) and developers building structured-output pipelines on local models.*
---
## Context
We build [Gary](https://github.com/mpklu/gary), a privacy-first personal assistant CLI for macOS. It runs entirely locally: an encrypted database, a daemon process, and a local LLM served by Ollama. The LLM handles three structured tasks:
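For readers new to structured output on local models, here is a minimal sketch of the kind of call such a pipeline makes: post a prompt plus a JSON schema to the local Ollama server, which constrains decoding so the reply parses against that schema. Everything specific below is illustrative, not Gary's actual code: the model tag, the schema, the prompt, and the assumption of an Ollama server on its default port 11434.

```python
import json

import requests  # third-party; `pip install requests`

# Hypothetical task-extraction schema -- a placeholder for illustration,
# not one of Gary's actual three tasks.
TASK_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "due": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["title", "due", "priority"],
}

resp = requests.post(
    "http://localhost:11434/api/chat",  # Ollama's default local endpoint
    json={
        # Illustrative tag; substitute whatever tag your Ollama install uses.
        "model": "gemma4:e4b",
        "messages": [
            {
                "role": "user",
                "content": "Extract the task from: 'Ship the report by Friday, it's urgent.'",
            }
        ],
        # Passing a JSON schema as `format` asks Ollama to constrain
        # decoding to output that validates against the schema.
        "format": TASK_SCHEMA,
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
task = json.loads(resp.json()["message"]["content"])
print(task)
```

Even with schema-constrained decoding, the `json.loads` call deserves a guard in production code, since output truncated at the token limit can still fail to parse.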