Java primitives and instanceof: Why the rule is changing

For decades, Java has drawn a clear distinction between primitive types and reference types, with each category following its own rules in the language. One of those rules was simple: instanceof applies to reference types, not primitives. That separation has shaped how generations of Java developers reason about type checks and conversions.

One of our speakers at OCX 26, Manoj Nalledathu Palat, who leads work on the Java compiler at IBM, explains in an interview that this design choice was intentional. “It is illegal in classic Java, and fairly so,” he says, reflecting the original meaning of instanceof as a check tied to type derivation. What is changing now is not that rule itself, but the context in which Java developers increasingly use pattern matching, and the need for a more uniform way to reason about types across the language.

 

So why is Java revisiting this rule now?

The answer lies in how developers increasingly use pattern matching. “If you want to apply that pattern matching uniformly to both reference type and primitive types, then you need to bring in that concept,” Manoj explains. When instanceof is reframed as a way to ask whether a value can be safely treated as a particular type, rather than whether it is derived from one, primitives fit naturally into that expanded meaning.
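Reference-type pattern matching, standard since Java 16, already embodies this “can it be safely treated as” reading; a minimal sketch (the preview extension to primitives mentioned in the comment follows JEP 455, which this article discusses):

```java
public class PatternDemo {
    public static void main(String[] args) {
        Object value = "hello";
        // Reference-type pattern: the test asks "can value be treated
        // as a String?" and binds s only when the answer is yes.
        if (value instanceof String s) {
            System.out.println(s.length()); // prints 5
        }
        // Under the Primitives in Patterns preview, the same shape
        // extends to primitives, e.g. `if (i instanceof byte b)`,
        // which matches only when the int value fits in a byte.
    }
}
```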

This shift is not about making Java more permissive. It is about making existing behaviour more explicit, especially around primitive conversions. Manoj highlights a danger many teams underestimate: implicit primitive conversions can silently introduce bugs. Unlike reference types, where invalid casts surface as runtime exceptions, primitive overflows and narrowing conversions can fail quietly and remain undetected.
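The silent failure mode is easy to reproduce with today’s explicit narrowing casts; a small illustration in standard Java:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        int value = 300;
        // A reference cast that cannot succeed throws ClassCastException.
        // A primitive narrowing cast never throws: it silently keeps
        // only the low-order bits of the value.
        byte b = (byte) value;         // 300 does not fit in a byte (-128..127)
        System.out.println(b);         // prints 44, not 300

        long big = 1L << 40;
        int truncated = (int) big;     // high 32 bits are dropped
        System.out.println(truncated); // prints 0
    }
}
```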

To address this, Java introduces stronger compiler guarantees. Manoj says, “the compiler is with you on this,” describing checks that act as “a primitive’s answer to a class cast exception”. Rather than relying purely on developer discipline, the language increasingly provides guardrails that make unsafe assumptions visible.

Importantly, Primitives in Patterns is a preview feature. It is not something developers should adopt in production today. Instead, it offers insight into where the Java language is heading, and why long-held mental models around primitives are being carefully and deliberately updated.

 

Primitives in patterns – Providing foundational changes to next-gen Java types

In his session at the Open Community Experience 2026, Manoj Nalledathu Palat will help you rethink how Java treats primitives and reference types, and where the remaining differences still matter in real code. You will leave with a clearer mental model of primitive conversions, their risks, and the language guarantees the compiler provides.

 


Daniela Nastase


Fresh Energy In March (2026 Wallpapers Edition)

Blooming flowers, longer days, milder temperatures — with March just around the corner, the world is slowly but surely awakening from its winter slumber, fueling us with fresh energy. And even if spring is far away in your part of the world, you might sense that 2026 has gained full speed by now, making it the perfect moment to turn those plans and ideas you’ve been carrying around into action.

To accompany you on all those adventures that March may bring, we have a new collection of desktop wallpapers for you, just as it has been a monthly tradition here at Smashing Magazine for more than 14 years already. Designed by artists and designers from across the globe, each wallpaper comes in a variety of screen resolutions and can be downloaded for free. A huge thank-you to everyone who shared their designs with us — this post wouldn’t be possible without your kind support!

If you, too, would like to get featured in one of our upcoming wallpapers editions, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with! Happy March!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express their emotions and experiences through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.

Timid Blossom

“With Spring knocking and other seasons fighting to get attention, March greets us with blossoms.” — Designed by Ginger It Solutions from Serbia.

  • preview
  • with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1020, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1020, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Cascade Style Sheet

Designed by Ricardo Gimenes from Spain.

  • preview
  • with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

I’m Not Okay, But It’s Okay

Designed by Ricardo Gimenes from Spain.

  • preview
  • with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Let’s Spring

“After some freezing months, it’s time to enjoy the sun and flowers. It’s party time, colors are coming, so let’s spring!” — Designed by Colorsfera from Spain.

  • preview
  • without calendar: 320×480, 1024×768, 1024×1024, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Spring Is Coming

“This March, our calendar design epitomizes the heralds of spring. Soon enough, you’ll be waking up to the singing of swallows, in a room full of sunshine, filled with the empowering smell of daffodil, the first springtime flowers. Spring is the time of rebirth and new beginnings, creativity and inspiration, self-awareness, and inner reflection. Have a budding, thriving spring!” — Designed by PopArt Studio from Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1440×900, 1440×1050, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Explore The Forest

“This month, I want to go to the woods and explore my new world in sunny weather.” — Designed by Zi-Cing Hong from Taiwan.

  • preview
  • without calendar: 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Coffee Break

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Time To Wake Up

“Rays of sunlight had cracked into the bear’s cave. He slowly opened one eye and caught a glimpse of nature in blossom. Is it spring already? Oh, but he is so sleepy. He doesn’t want to wake up, not just yet. So he continues dreaming about those sweet sluggish days while everything around him is blooming.” — Designed by PopArt Studio from Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

So Tire

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Botanica

Designed by Vlad Gerasimov from Georgia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Music From The Past

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Queen Bee

“Spring is coming! Birds are singing, flowers are blooming, bees are flying… Enjoy this month!” — Designed by Melissa Bogemans from Belgium.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

MARCHing Forward

“If all you want is a little orange dinosaur MARCHing (okay, I think you get the pun) across your monitor, this wallpaper was made just for you! This little guy is my design buddy at the office and sits by (and sometimes on top of) my monitor. This is what happens when you have designer’s block and a DSLR.” — Designed by Paul Bupe Jr from Statesboro, GA.

  • preview
  • without calendar: 1024×768, 1280×1024, 1440×900, 1920×1080, 1920×1200, 2560×1440

Spring Bird

Designed by Nathalie Ouederni from France.

  • preview
  • without calendar: 1024×768, 1280×1024, 1440×900, 1680×1200, 1920×1200, 2560×1440

Awakening

“I am the kind of person who prefers the cold but I do love spring since it’s the magical time when flowers and trees come back to life and fill the landscape with beautiful colors.” — Designed by Maria Keller from Mexico.

  • preview
  • without calendar: 320×480, 640×480, 640×1136, 750×1334, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1242×2208, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Fresh Lemons

Designed by Nathalie Ouederni from France.

  • preview
  • without calendar: 320×480, 1024×768, 1280×1024, 1440×900, 1600×1200, 1680×1200, 1920×1200, 2560×1440

Jingzhe

“Jīngzhé is the third of the 24 solar terms in the traditional East Asian calendars. The word 驚蟄 means ‘the awakening of hibernating insects’. 驚 is ‘to start’ and 蟄 means ‘hibernating insects’. Traditional Chinese folklore says that during Jingzhe, thunderstorms will wake up the hibernating insects, which implies that the weather is getting warmer.” — Designed by Sunny Hong from Taiwan.

  • preview
  • without calendar: 800×600, 1280×720, 1280×1024, 1366×768, 1400×1050, 1680×1200, 1920×1080, 2560×1440

Waiting For Spring

“As days are getting longer again and the first few flowers start to bloom, we are all waiting for spring to finally arrive.” — Designed by Naioo from Germany.

  • preview
  • without calendar: 1280×800, 1366×768, 1440×900, 1680×1050, 1920×1080, 1920×1200

Happy Birthday Dr. Seuss!

“March 2nd marks the birthday of the most creative and extraordinary author ever, Dr. Seuss! I have included an inspirational quote about learning to encourage everyone to continue learning new things every day.” — Designed by Safia Begum from the United Kingdom.

  • preview
  • without calendar: 800×450, 1280×720, 1366×768, 1440×810, 1600×900, 1680×945, 1920×1080, 2560×1440

Spring Is Inevitable

“Spring is round the corner. And very soon plants will grow on some other planets too. Let’s be happy about a new cycle of life.” — Designed by Igor Izhik from Canada.

  • preview
  • without calendar: 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600

Ballet

“A day, even a whole month, isn’t enough to show how much a woman should be appreciated. Dear ladies, any day or month are yours if you decide so.” — Designed by Ana Masnikosa from Belgrade, Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1040, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Pizza Time

“Who needs an excuse to look at pizza all month?” — Designed by James Mitchell from the United Kingdom.

  • preview
  • without calendar: 1280×720, 1280×800, 1366×768, 1440×900, 1680×1050, 1920×1080, 1920×1200, 2560×1440, 2880×1800

Imagine

Designed by Romana Águia Soares from Portugal.

  • preview
  • without calendar: 640×480, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Questions

“Doodles are slowly becoming my trademark, so I just had to use them to express this phrase I’m fond of recently. A bit enigmatic, philosophical. Inspiring, isn’t it?” — Designed by Marta Paderewska from Poland.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

The Unknown

“I made a connection, between the dark side and the unknown lighted and catchy area.” — Designed by Valentin Keleti from Romania.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Let’s Get Outside

Designed by Lívia Lénárt from Hungary.

  • preview
  • without calendar: 1024×768, 1280×1024, 1366×768, 1600×1200, 1680×1200, 1920×1080, 1920×1200, 2560×1440

Fresh Flow

“It’s time for the water to go down the mountains, it’s time for the rivers to get rid of ice blocks, it’s time for the ground to feed the plants, it’s time to go out and take a deep breath. I imagined these ideas with interlacing colored lines.” — Designed by Philippe Brouard from France.

  • preview
  • without calendar: 1024×768, 1366×768, 1600×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600, 2880×1800, 3840×2160

St. Patrick’s Day

“On the 17th March, raise a glass and toast St. Patrick on St. Patrick’s Day, the Patron Saint of Ireland.” — Designed by Ever Increasing Circles from the United Kingdom.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1080×1080, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Sending FriendShips To March

Designed by João Acácio from Portugal.

  • preview
  • without calendar: 800×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Bee-utiful Smile

Designed by Doreen Bethge from Germany.

  • preview
  • without calendar: 640×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3200×2000

Sakura

Designed by Evacomics from Singapore.

  • preview
  • without calendar: 320×480, 768×1024, 1024×768, 1280×800, 1280×1024, 1440×900, 1920×1080, 2560×1440

Get Featured Next Month

Feeling inspired? We’ll publish the April wallpapers on March 31, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

FreeBSD AI-Written WiFi Driver for MacBook: Real-World Result

FreeBSD’s hardware support has always been its awkward footnote. The OS is rock-solid for servers. ZFS, jails, network performance — all excellent. But consumer laptops? That’s where things get messy. Broadcom WiFi chips, in particular, have been a pain point for years. Linux has brcmfmac. FreeBSD doesn’t.

Vladimir Varankin ran into this exact wall in early 2026 when he tried running FreeBSD on an old MacBook. The Broadcom chip inside wasn’t supported. The normal path — wait for a volunteer maintainer, submit a port request, hope someone cares — would take months at minimum. So he tried something different: he asked an AI to write the driver.

The result wasn’t a toy demo. It produced functional code that got his machine on a network. The Hacker News discussion that followed (February 2026, item #47129361) made clear this wasn’t just a neat trick — it touched something the BSD community has quietly worried about for years. Driver coverage is an existential problem for desktop FreeBSD adoption. AI might actually move the needle.

This piece breaks down what happened, why the approach worked, where it fell short, and what it means for systems engineers thinking about AI-assisted low-level development.

Key Takeaways

  • Vladimir Varankin used AI-assisted code generation to produce a working brcmfmac WiFi driver port for FreeBSD on a MacBook — a task that previously had no viable solution in the FreeBSD ecosystem.
  • This real-world result demonstrates that AI can handle low-level kernel driver work, not just application-layer scaffolding, when given precise context and iterative feedback.
  • The Hacker News thread on this project attracted hundreds of comments in early 2026, signaling that the BSD community views AI-assisted driver development as a legitimate near-term workflow, not a curiosity.
  • The biggest bottleneck wasn’t AI code quality — it was the engineer’s ability to provide accurate hardware specs, kernel API context, and debugging feedback across iterations.
  • This approach could meaningfully shrink FreeBSD’s hardware compatibility gap, which has historically limited adoption on consumer laptops like Apple MacBooks.

Why FreeBSD’s Driver Gap Exists

FreeBSD is not a niche project. According to the FreeBSD Wikipedia entry, it traces its lineage directly to the Berkeley Software Distribution Unix from the 1970s and has been continuously developed since 1993. Netflix, Sony PlayStation infrastructure, and Juniper Networks all run FreeBSD derivatives. It’s serious software.

But serious server software and good laptop hardware support are different problems entirely. Broadcom’s WiFi chips — common in MacBooks from roughly 2008 through 2016 — use a driver architecture that Linux’s brcmfmac handles through a combination of firmware loading and kernel integration. Porting that to FreeBSD’s kernel means understanding both the chip’s firmware interface and FreeBSD’s network driver stack simultaneously. That’s a non-trivial ask for volunteer contributors who mostly care about the server use case.

The MacBook specifically has been a frustrating target. Apple’s hardware is well-documented in one sense — the machines are popular enough that reverse-engineering efforts exist — but Broadcom’s firmware blobs and the chip’s initialization sequence have never had an official FreeBSD port. The Linux kernel’s brcmfmac driver, developed over years with input from Broadcom engineers, is the reference implementation almost everyone else points at.

Varankin’s situation in early 2026 was straightforward: old MacBook, fresh FreeBSD install, no WiFi. His writeup on vladimir.varank.in documents the process he followed to get from zero to a working connection using AI code generation as the primary development tool. The Hacker News thread that followed showed this resonated — not just as a hack, but as a potential workflow pattern.

The timing matters. By early 2026, large language models capable of generating syntactically correct C kernel code with reasonable semantic accuracy had become widely accessible. What Varankin demonstrated is that the bottleneck has shifted. It’s no longer “can AI write kernel code?” It’s “can a skilled engineer provide good enough context for AI to produce useful kernel code?”

What the AI Actually Produced (and Didn’t)

The result wasn’t a single prompt producing a complete, production-ready driver. Varankin’s writeup makes this clear. The process was iterative. He fed the AI the Linux brcmfmac source as reference material, described FreeBSD’s kernel driver interface requirements, and worked through multiple rounds of debugging.

What the AI handled well:

  • Structural translation from Linux kernel patterns to FreeBSD’s if_bge-style network driver conventions
  • Boilerplate generation for device attachment, detach, and interrupt handling routines
  • Firmware loading scaffolding — the code that pulls Broadcom’s firmware blob into memory at driver init

What required heavy human intervention:

  • Identifying which firmware blob version matched the specific MacBook’s chip revision
  • Debugging kernel panics caused by incorrect memory barrier placement
  • Validating that the interrupt handling matched FreeBSD’s actual IRQ model rather than Linux’s

The AI produced code that compiled. Getting it to run without panicking required Varankin’s own kernel debugging experience. That’s an important distinction. The AI compressed weeks of initial scaffolding work into hours. The last 20% — the subtle, hardware-specific debugging — still needed a human who understood what the kernel was actually doing.

This approach can fail when the engineer providing context lacks kernel debugging experience. Without the ability to interpret a panic trace or read dmesg output accurately, the iterative loop breaks down. The AI produces plausible-looking code. The engineer can’t tell why it’s crashing. Progress stalls. That failure mode isn’t hypothetical — multiple commenters in the Hacker News thread described similar dead ends on earlier AI-assisted driver attempts.

The Iterative Prompting Workflow

The workflow Varankin used shares structure with what experienced engineers are calling “context-heavy prompting” for systems code. It’s not “write me a WiFi driver.” It’s a sequence:

  1. Provide the Linux reference driver source
  2. Specify FreeBSD kernel version and API constraints
  3. Request a structural skeleton, review it, identify gaps
  4. Feed error messages, kernel logs, and dmesg output back into the conversation
  5. Iterate on specific functions — not the whole driver at once

This matters because it shows the human skill requirement hasn’t disappeared. It’s changed shape. Writing C kernel code from scratch required deep knowledge of both the hardware and the OS internals. The AI-assisted approach requires the ability to evaluate generated C kernel code, understand what the kernel logs are saying, and ask precise follow-up questions. That’s still a senior engineer skill set. Just a different one.

The Hacker News discussion highlighted this split clearly. Multiple commenters with FreeBSD kernel experience noted that Varankin’s debugging decisions — particularly around the firmware loading sequence — weren’t things a non-expert could have navigated even with AI assistance.

AI-Assisted vs. Traditional Driver Development

| Criteria | Traditional Volunteer Port | AI-Assisted Development | Commercial Driver Contract |
| --- | --- | --- | --- |
| Time to initial working code | Weeks to months | Days (with expert review) | Weeks (scoped project) |
| Required expertise | Deep kernel + hardware knowledge | Kernel debugging + AI prompting | Varies by contractor |
| Code quality (initial) | High (expert-written) | Medium (requires validation) | High |
| Maintenance burden | Depends on contributor availability | High — AI won’t maintain it | Contractual |
| Cost | Volunteer time | Engineer time + AI API costs | $10K–$50K+ depending on scope |
| Best for | Widely-used hardware with community interest | Niche hardware, specific engineer need | Enterprise with budget and specific HW requirement |

The comparison reveals something important. AI-assisted development doesn’t beat traditional approaches on code quality or maintenance — it beats them on speed to functional prototype. For niche hardware like a specific MacBook’s Broadcom chip that lacks community interest, the traditional approach effectively produces nothing. AI-assisted gets to “working” faster than waiting for a volunteer who may never appear.

The trade-off is maintenance. AI-written code with no upstream maintainer is technical debt from day one. Anyone using this approach should treat the output as a starting point for a proper port, not a finished product.

What This Means for FreeBSD’s Hardware Coverage Problem

FreeBSD’s driver coverage gap is real and documented. The FreeBSD Foundation’s own hardware compatibility notes acknowledge that consumer WiFi chips — especially Broadcom — are poorly supported compared to Linux. This has historically meant that running FreeBSD on a laptop requires either an external USB WiFi adapter or accepting no wireless connectivity.

Varankin’s result suggests a viable middle path: engineers who need a specific driver and have the kernel debugging skills to validate AI-generated code can now produce working drivers faster than the traditional volunteer-contribution pipeline allows.

This doesn’t replace proper upstream drivers maintained by the FreeBSD team. A driver produced this way needs code review, testing across chip revisions, and long-term maintenance. But it changes the starting point dramatically. Getting from “no driver” to “something that boots and connects” used to take a skilled developer weeks. Varankin’s timeline, as described in his writeup, was measured in days.

And the implications extend beyond MacBooks. Broadcom chips power a wide range of consumer hardware. If this workflow proves replicable — and the Hacker News thread suggests engineers are already trying — FreeBSD’s hardware compatibility list on the desktop side could expand faster than it has in years.

OpenBSD and NetBSD face similar driver gaps. The workflow Varankin documented — Linux reference driver, AI translation, iterative debugging — isn’t FreeBSD-specific. Other BSD projects could adopt and adapt it with modest effort.

Practical Implications

If you’re a systems engineer running FreeBSD on hardware with missing driver support, this workflow is worth attempting — provided you have kernel debugging experience. The result shows the ceiling of what’s achievable is higher than most expected. But the floor is equally clear: without the ability to interpret kernel panics and dmesg output, AI-generated code won’t get you to a working system.

If you’re involved with FreeBSD core development, this pattern could accelerate the driver contribution pipeline significantly. A policy for accepting AI-assisted driver ports — with appropriate review requirements — would let the community convert more of these one-off engineering efforts into maintained upstream contributions. Watch the FreeBSD developer mailing lists; policy discussions on this are likely within the next six months.

Short-term actions worth taking now:

  • Identify the closest Linux reference driver for your target hardware and assess whether your team has the kernel debugging experience to validate AI output
  • Document your hardware’s chip revision precisely — exact firmware blob identifiers and chip variant info matter
  • Set up a FreeBSD kernel development environment with crash dump capture configured before starting any AI-assisted driver work

Longer-term:

  • Establish internal review checklists for AI-generated kernel code covering memory safety, interrupt handling correctness, and firmware loading sequence validation
  • Engage the FreeBSD community early if you produce a working driver — upstream acceptance requires code review that benefits from community knowledge
  • Run static analysis tools such as Coverity or FreeBSD’s own scan-build integration against any AI-generated kernel code before testing on real hardware

What Comes Next

The FreeBSD AI-written WiFi driver MacBook result answers a question the systems community has been asking quietly: can AI actually help with driver development, not just web apps? The answer is yes — conditionally.

AI-assisted driver development compresses weeks of scaffolding work into days when guided by an experienced kernel engineer. The human expertise requirement shifts from “write kernel code” to “evaluate and debug kernel code” — still demanding, but differently scoped. Code quality requires explicit validation. And the workflow is most valuable for niche hardware that traditional volunteer contribution pipelines would never prioritize.

The real shift isn’t that AI replaced a driver developer. It’s that the threshold for starting a driver port dropped significantly. An engineer who understands FreeBSD internals but couldn’t justify weeks of work on a niche Broadcom chip can now justify days of AI-assisted effort.

Expect more FreeBSD engineers to attempt this for other missing drivers over the next year. LLMs with longer context windows and better C reasoning will improve initial code quality, reducing iteration cycles. And the FreeBSD Foundation will likely need to formalize guidance on AI-assisted contributions sooner than anyone planned.

That’s a meaningful change for an OS that’s spent years watching its hardware compatibility list stagnate on the desktop side.

References

  1. Varankin, Vladimir. “FreeBSD doesn’t have Wi-Fi driver for my old MacBook. AI build one for me.” February 2026. vladimir.varank.in/notes/2026/02/freebsd-brcmfmac/
  2. FreeBSD. Wikipedia. en.wikipedia.org/wiki/FreeBSD
  3. FreeBSD doesn’t have Wi-Fi driver for my old MacBook. AI build one for me. Hacker News discussion, item #47129361. February 2026. news.ycombinator.com/item?id=47129361

🚀 Stop Guessing Which LLM Runs on Your Machine — Meet llmfit


Running Large Language Models locally sounds exciting…
until reality hits:

  • Model too large ❌
  • VRAM insufficient ❌
  • RAM crashes ❌
  • Inference painfully slow ❌

Most developers waste hours downloading models that never actually run on their hardware.

That’s exactly the problem llmfit solves.

👉 GitHub: https://github.com/AlexsJones/llmfit

The Real Problem with Local LLMs

The local-LLM ecosystem exploded:

  • Llama variants
  • Mistral models
  • Mixtral MoE models
  • Quantized GGUF builds
  • Multiple providers

But here’s the uncomfortable truth:

Developers usually choose models blindly.

You see “7B”, “13B”, or “70B” and assume it might work.

Reality depends on:

  • System RAM
  • GPU VRAM
  • CPU capability
  • Quantization level
  • Context window
  • Multi-GPU availability

One wrong assumption → wasted downloads + broken setups.

What is llmfit?

llmfit is a hardware-aware CLI/TUI tool that tells you:

✅ Which LLM models actually run on your machine
✅ Expected performance
✅ Memory requirements
✅ Optimal quantization
✅ Speed vs quality tradeoffs

It automatically detects your CPU, RAM, and GPU, compares them against a curated LLM database, and recommends models that fit.

Think of it as:

“pcpartpicker — but for Local LLMs.”

Why This Tool Matters

Local AI adoption fails mostly because of hardware mismatch.

Typical workflow today:

Download model → Try run → Crash → Google error → Repeat

llmfit flips this:

Scan hardware → Find compatible models → Run successfully

This sounds simple — but it removes the biggest friction in local AI experimentation.

Key Features

🧠 Hardware Detection

Automatically inspects:

  • RAM
  • CPU cores
  • GPU & VRAM
  • Multi-GPU setups

No manual configuration required.

📊 Model Scoring System

Each model is evaluated across:

  • Quality
  • Speed
  • Memory fit
  • Context size

Instead of asking “Can I run this?”
you get ranked recommendations.
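The ranking idea is easy to picture. llmfit's actual formula and weights aren't documented here, so the axes, weights, and ratings below are purely illustrative, but a ranked recommendation boils down to a weighted sum like this:

```python
def score_model(quality: float, speed: float, memory_fit: float, context: float,
                weights=(0.35, 0.25, 0.30, 0.10)) -> float:
    """Weighted score across the four axes, each normalized to 0..1.

    The weights are illustrative defaults, not llmfit's real values.
    """
    axes = (quality, speed, memory_fit, context)
    return round(sum(w * a for w, a in zip(weights, axes)), 3)

# Hypothetical ratings for three models on a machine with modest RAM
models = {
    "Mistral 7B Q4": score_model(0.70, 0.90, 1.00, 0.50),
    "Mixtral Q4":    score_model(0.85, 0.50, 0.40, 0.60),
    "Llama 70B Q4":  score_model(0.95, 0.20, 0.00, 0.70),
}

# Best-fit model first
for name, score in sorted(models.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score}")
```

With these made-up numbers the smaller model wins: raw quality loses to memory fit and speed when the hardware is the constraint, which is exactly the tradeoff a ranked list surfaces.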

🖥 Interactive Terminal UI (TUI)

llmfit ships with an interactive terminal dashboard.

You can:

  • Browse models
  • Compare providers
  • Evaluate performance tradeoffs
  • Select optimal configurations

All from the terminal.

⚡ Quantization Awareness

This is huge.

Most developers underestimate how much quantization affects feasibility.

llmfit considers:

  • Dynamic quantization options
  • Memory-per-parameter estimates
  • Model compression impact

Its database assumes optimized formats like Q4 quantization when estimating hardware needs (GitHub).
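The memory-per-parameter arithmetic is worth internalizing even without the tool. The function below is the standard back-of-envelope estimate (weights take parameter count × bits per weight bytes), not llmfit's actual formula; the 1.2× overhead factor for KV cache and runtime buffers is my assumption:

```python
def estimate_model_memory_gb(params_billions: float, bits_per_weight: int = 4,
                             overhead: float = 1.2) -> float:
    """Back-of-envelope memory estimate for a quantized model.

    Weights occupy params * (bits / 8) bytes; the 1.2x overhead factor
    for KV cache and runtime buffers is an assumption, not llmfit's formula.
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(weight_bytes / 1024**3 * overhead, 1)

print(estimate_model_memory_gb(7))    # 3.9 -> a 7B model at Q4 fits in 8 GB RAM
print(estimate_model_memory_gb(70))   # 39.1 -> a 70B model at Q4 does not
```

This is why the quantization level matters so much: the same 7B model at FP16 needs roughly four times the memory of its Q4 build.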

Installation

cargo install llmfit

Or build from source:

git clone https://github.com/AlexsJones/llmfit
cd llmfit
cargo build --release

Then simply run:

llmfit

That’s it.

Example Workflow

Step 1 — Run Detection

llmfit

The tool scans your system automatically.

Step 2 — View Compatible Models

You’ll see recommendations like:

Model           Fit            Speed    Quality
Mistral 7B Q4   ✅ Excellent    Fast     High
Mixtral         ⚠ Partial      Medium   Very High
Llama 70B       ❌ Not Fit

No guessing required.

Step 3 — Choose Smartly

Now you can decide:

  • Faster dev workflow?
  • Better reasoning?
  • Larger context window?

Based on real hardware limits.

Under the Hood

llmfit is written in Rust, which makes sense:

  • Fast hardware inspection
  • Low memory overhead
  • Native system access
  • CLI-first developer experience

It combines:

  • Hardware profiling
  • Model metadata databases
  • Performance estimation logic

to produce actionable recommendations.

Who Should Use llmfit?

✅ AI Engineers

Avoid downloading unusable checkpoints.

✅ Backend Developers

Quickly test local inference pipelines.

✅ Indie Hackers

Run AI locally without expensive GPUs.

✅ Students & Researchers

Maximize limited hardware setups.

The Bigger Insight

The future of AI isn’t just bigger models.

It’s right-sized models.

Most real-world applications don’t need a 70B model — they need:

  • predictable latency
  • reasonable memory usage
  • local privacy
  • offline capability

Tools like llmfit push developers toward efficient AI engineering, not brute-force scaling.

Final Thoughts

Local LLM tooling is evolving fast, but usability still lags behind.

llmfit fixes a surprisingly painful gap:

Before running AI, know what your machine can actually handle.

Simple idea. Massive productivity gain.

If you’re experimenting with local AI in 2026, this tool should probably be in your workflow.

⭐ Repo: https://github.com/AlexsJones/llmfit

The Economics of Calculator Content: How Free Tools Drive Organic Traffic

When most content strategists talk about “SEO,” they’re thinking blogs. 1500-word guides, expert roundups, topical clusters. These are solid, but they’re also commoditized. Everyone writes them.

Calculator content is different. It’s less competitive, higher intent, and surprisingly monetizable. Over the past 18 months, I’ve built OnlineCalcAI—a platform serving 206+ calculators in 30 languages—and the economics are revelatory.

This is what I learned about why calculators are the hidden gem of content strategy.

Why Calculators Beat Blog Posts

1. Search Intent is Crystal Clear

When someone searches “age calculator,” they want a tool. Not an article about age calculation methods, not a TikTok explaining how to compute it manually—a working calculator.

Compare this to “how to calculate age,” which splits across several different search intents:

  • Users wanting a quick tool
  • Users wanting to understand the concept
  • Users wanting birthday ideas based on age
  • Parents checking developmental milestones

Blog posts split this traffic. Calculators own it.

Real data from OnlineCalcAI:

  • “BMI calculator” → 2,100 monthly searches, Position 1
  • “BMI chart for women” → 420 monthly searches, Position 3
  • “How to calculate BMI” → 890 monthly searches, Position 8

The calculator ranks #1. It gets clicked first because it directly answers the query.

2. Lower Content Competition

Tool-focused keywords have fewer competitors than blog keywords. Let’s compare difficulty:

Keyword                            Type         Difficulty   CPC
“protein calculator”               Calculator   15           $0.40
“how to calculate protein”         Blog         32           $0.35
“best protein calculator online”   Review       45           $1.20

The calculator keyword is less contested because:

  • Not every site has developer capacity to build tools
  • Content marketers default to writing articles
  • Tools require ongoing maintenance (bugs, browser compatibility)

This is a moat. Low competition = easy ranking = consistent traffic.

3. Extreme Dwell Time & Engagement

Users interact with calculators. They don’t skim and bounce—they fill out forms, see results, try different inputs.

From our analytics (OnlineCalcAI, Jan-Feb 2026):

  • Average session duration: 4:32 (vs. 1:20 for blogs)
  • Bounce rate: 12% (vs. 68% for blog articles)
  • Return visitor rate: 31% (vs. 8% for one-off blog posts)

Google’s algorithms value engagement. High dwell time + low bounce rate = quality signal = ranking boost.

4. Viral Shareability

People share tools. They forward a calculator to a friend, embed it on their own site, mention it on Reddit.

One of our users embedded the loan calculator on their finance blog, creating a backlink (and referral traffic) naturally. We didn’t pitch anything. The tool was simply useful enough to deserve placement.

Blog posts get linked when they rank high. Tools get linked because they solve a problem.

The Featured Snippet Angle

Calculators dominate Position 0 in ways blog posts struggle to.

Most featured snippets are “definition” (paragraph) or “steps” (list) snippets. Calculator queries often appear as interactive snippets in Google’s new SGE (Search Generative Experience).

However, the real win isn’t SGE—it’s the sidebar calculator widget that Google shows for certain tools:

[Google Search: "compound interest calculator"]

Box 1: OnlineCalcAI calculator (embedded, interactive)
Box 2: "Featured snippet" with definition
Box 3-10: Blog posts and other resources

When you own that interactive box, you get:

  • Direct engagement (users don’t leave Google)
  • CTR lift (they click through because the embedded version isn’t enough)
  • Brand awareness (they see your URL, logo, name repeatedly)

From our data:

  • Keywords with embedded calculator widget: 3.2x CTR vs. regular snippets
  • Average daily impressions on “mortgage calculator”: 340 (without embedding)
  • Estimated impressions with embedding: 890+ (based on similar tools)

Multi-Language Scaling = Revenue Multiplier

This is where the economics get interesting.

One English calculator = decent traffic (500-2000 sessions/month).
One calculator × 30 languages = multiplied demand (15,000-60,000 sessions/month).

Why? Different languages have different search volumes:

Calculator            English   German   French   Spanish   Portuguese
Age Calculator        1,200     890      650      1,100     480
BMI Calculator        2,100     1,800    1,200    1,650     950
Mortgage Calculator   3,400     2,200    1,800    2,100     1,300

Total monthly searches for 3 calculators:

  • English only: 6,700
  • 30 languages: ~100,000+

The traffic compounds. And with 206 calculators, you’re looking at 1M+ monthly searches across all languages.
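The table's figures can be sanity-checked directly. A quick sketch summing the five languages shown (the 30-language and 1M+ totals are the article's extrapolations, not computed here):

```python
# Monthly search volumes from the table above
searches = {
    "Age Calculator":      {"en": 1200, "de": 890,  "fr": 650,  "es": 1100, "pt": 480},
    "BMI Calculator":      {"en": 2100, "de": 1800, "fr": 1200, "es": 1650, "pt": 950},
    "Mortgage Calculator": {"en": 3400, "de": 2200, "fr": 1800, "es": 2100, "pt": 1300},
}

english_only = sum(v["en"] for v in searches.values())
five_languages = sum(sum(v.values()) for v in searches.values())

print(english_only)     # 6700, matching the English-only figure above
print(five_languages)   # 22820 across just these five languages
```

Even five languages more than triple the addressable volume for the same three tools.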

Monetization Models

1. Advertising (AdSense, Affiliate Networks)

Our standard approach. Every page carries ads (with consent). CPM ranges:

  • International traffic (mixed): $2-5 CPM
  • US traffic: $8-12 CPM
  • EU traffic (GDPR): $1-3 CPM (consent rates drop ad demand)

For OnlineCalcAI:

  • Estimated monthly impressions: 2M+
  • Blended CPM: $4.50
  • Monthly revenue: ~$9,000 (conservative)

Not life-changing, but recurring revenue on evergreen content.
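For reference, the CPM arithmetic behind that estimate is just impressions divided by a thousand, times the blended CPM:

```python
def monthly_ad_revenue(impressions: int, blended_cpm_usd: float) -> float:
    """CPM is cost per mille: revenue = (impressions / 1000) * CPM."""
    return impressions / 1000 * blended_cpm_usd

print(monthly_ad_revenue(2_000_000, 4.50))  # 9000.0
```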

2. Premium / SaaS Upsell

Not every calculator needs to be free. High-intent users (tax calculators, retirement planning, cost estimators) will pay for advanced features:

  • Pro version: Advanced inputs, historical data, PDF export → $4.99/month
  • API access: Embed calculator on your site → $29/month
  • White-label: Customize branding → $99/month

This is where the real money is. You’re not monetizing casual users; you’re monetizing businesses that want your calculator on their site.

3. Affiliate Commissions

A mortgage calculator naturally flows into mortgage refinancing offers. A loan calculator leads to lending products.

Affiliate networks pay:

  • LendingClub, SoFi: 10-25% commission on funded loans
  • Insurance calculators: 5-10% per lead
  • Investment calculators: $5-20 per qualified lead

With 100K+ monthly visitors, even low conversion rates (0.5%) generate solid affiliate revenue.

Real Numbers: The OnlineCalcAI Case

Current state (Feb 2026):

  • 206 calculators
  • 30 languages = ~6,180 pages
  • ~18 months of SEO work
  • Zero paid advertising

Traffic metrics:

  • Monthly sessions: 65,000+
  • Monthly pageviews: 180,000+
  • Return visitor rate: 31%
  • Average session duration: 4m 32s

Revenue (conservative):

  • AdSense: $8,000-10,000/month
  • Affiliate commissions: $1,500-2,000/month
  • Total: ~$100K+ annualized

Cost structure:

  • Hosting (o2switch): $15/month
  • Domain: $12/year
  • Maintenance (1-2 hours/week): $0 (founder time)
  • Total: ~$200/year

ROI: 50,000%+ on operational costs (not counting content creation effort upfront).

The Compounding Effect

Here’s the strategic insight most people miss:

Each new calculator generates:

  1. Direct organic traffic (its own keyword ranking)
  2. Semantic boost (topical authority across tools)
  3. Internal linking juice (cross-links between calculators)
  4. Backlink attraction (people link to useful tools)

After 206 calculators, each new tool added ranks faster because domain authority is high.

Our calculator #180 reached position 3 for its target keyword in 3 weeks. Calculator #20 took 8 weeks.

This is what SEO compounding looks like.

Challenges (Be Honest About Them)

Browser Compatibility

Calculators must work everywhere—old IE, mobile, Safari on iOS 12. Blog posts don’t have this problem.

Cost: 15-20% of development time is cross-browser testing.

Maintenance Burden

A broken calculator ruins your brand. A broken blog post is just… forgotten.

We’ve had:

  • Math precision errors (JavaScript floating-point bugs)
  • Timezone issues in date calculators
  • Mobile input responsiveness glitches

Cost: Ongoing bug fixes, browser testing after each update.
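The floating-point precision bugs mentioned above are worth a concrete look. Python uses the same IEEE 754 doubles as JavaScript, so the same failure mode reproduces here, along with the two standard fixes for calculator code:

```python
from decimal import Decimal

# IEEE 754 doubles (shared by JavaScript and Python) cannot represent 0.1 exactly
print(0.1 + 0.2 == 0.3)    # False: the sum is 0.30000000000000004

# Fix 1: compute in the smallest integer unit (cents) and divide only for display
cents = 10 + 20
print(cents / 100 == 0.3)  # True: integer arithmetic is exact

# Fix 2: use decimal arithmetic when exact base-10 results matter
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

A financial calculator that shows a user $0.30000000000000004 loses trust instantly, which is why this class of bug hits tools harder than blog posts.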

Legal / Compliance

Tax calculators, loan calculators, health calculators = liability.

“This calculator is for educational purposes only” disclaimers are standard, but accuracy matters. One user relying on your investment calculator for real decisions = potential lawsuit.

Cost: Legal review, disclaimer templates, accuracy benchmarks.

Lessons for Your Strategy

If you’re considering building calculator content:

  1. Start with low-competition keywords that still have steady volume (50-200 monthly searches, difficulty < 30)
  2. Build 10-15 related calculators first – Don’t launch one. The semantic cluster effect is crucial
  3. Optimize for mobile – 65%+ of calculator users are on phones
  4. Make it shareable – Add “share result” buttons, embeddable widgets
  5. Go multi-language early – The ROI difference is massive
  6. Monetize conservatively – Don’t over-ad, or users will bounce
  7. Build integrations – API, embeds, Zapier integration → more backlinks, more distribution

Why This Works Better Than Chasing Trends

Blog content is competitive because everyone writes it. You’re fighting for scraps in a saturated market.

Calculator content is different. It’s:

  • Less competitive (fewer builders)
  • More durable (trend-proof)
  • Higher intent (users want tools, not opinions)
  • More monetizable (users are engaged, willing to explore premium features)

The economics reward builder-mentality marketers. If you can code (or hire someone who can), calculator platforms are an underexploited SEO gold mine.

Ready to build? Check out OnlineCalcAI to see 200+ calculators in action across 30 languages. Use one, study how it’s built, and consider what calculators your audience needs.

The next wave of organic traffic isn’t going to blogs. It’s going to tools.

What calculator would solve a real problem for your audience? Let’s discuss in the comments.

Disclaimer: Revenue figures are based on OnlineCalcAI’s publicly available analytics and conservative CPM estimates. Affiliate and SaaS figures are projections based on industry benchmarks.

Testing Microservice Changes from Git Worktrees End to End Without the Terminal Tab Explosion

TL;DR: I made a visual CLI tool called Recomposable to tackle this issue.

If you use Claude Code with git worktrees, you probably have multiple branches of the same repository checked out simultaneously. Claude works in one worktree, you review in another. This works well for single-service projects, but it breaks down when you run microservices.

The problem: you need to verify that the changes Claude made to one service still work with the rest of your stack. This means rebuilding that one service from the worktree’s code while keeping everything else on main. Docker Compose has no concept of worktrees; it only knows about files on disk, so you’re on your own. I solved this for myself with a CLI tool called Recomposable, which I discuss further down the page.

What you have to do today

Say Claude Code is working on your auth-service in a worktree, and you want to test its changes against the rest of your stack. Here’s the manual workflow:

Step 1: Find the worktree path.

Claude Code creates worktrees in .claude/worktrees/ with generated names, so you need to find it first.

git worktree list
# /Users/you/project                   abc1234 [main]
# /Users/you/project/.claude/worktrees/jan-abc123  def5678 [fix-oauth]

Step 2: Build the service from the worktree.

You need to combine the worktree path with the compose file path, specify the service, and build:

docker compose -f /Users/you/project/.claude/worktrees/jan-abc123/docker-compose.yml build auth-service

Step 3: Start it.

docker compose -f /Users/you/project/.claude/worktrees/jan-abc123/docker-compose.yml up -d auth-service

Step 4: Verify.

Open another tab, tail the logs, check the other services, maybe rebuild if it failed.

Step 5: Switch back.

When you’re done testing, repeat steps 2-3 but pointing back at the original compose file.

Every step requires you to remember or paste the worktree path, and if you have multiple compose files you also need the correct file name. There’s no overview of which services are running code from which branch – you just have to remember.

You can script this with aliases or a Makefile target, but you still lack a way to see at a glance which service is running from which worktree, and every new worktree means updating your aliases.

What recomposable does instead

recomposable is a Docker Compose TUI I built for development workflows, and version 1.1.4 adds worktree switching as a first-class feature.

Press t on any service, and a picker shows all available git worktrees. Select one with j/k and Enter, and the service is stopped, rebuilt from the target worktree’s compose file, and started. That’s it.

When services run from different branches, a WORKTREE column appears automatically with non-main branches highlighted in yellow, so you see at a glance which service runs from which branch.

SERVICE              STATUS    BUILT     PORTS          WORKTREE
auth-service         running   2m ago    5001           fix-oauth
api-gateway          running   1h ago    8080           main
web-app              running   3d ago    3000           main

No paths to remember, no compose file flags, no switching terminals to check which branch a container was built from.

How it works

When you press t, recomposable runs git worktree list --porcelain from the compose file’s directory to discover all worktrees, and the picker shows each worktree’s branch name and path.

On selection, it maps the compose file path to the equivalent path in the target worktree. If your compose file is at infra/docker-compose.yml relative to the git root, it looks for the same relative path in the target worktree and shows an error if the file doesn’t exist or the service isn’t defined in the target compose file.

The override is stored per service, and all subsequent operations — rebuild, restart, logs, exec, watch, dependency-aware cascade rebuild — automatically use the overridden compose file. Switching back is the same action: press t and select the original worktree.
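The path mapping described above is straightforward to sketch. This is not recomposable's actual source, just a minimal Python illustration of the relative-path mapping and existence check; the function name and error handling are mine:

```python
from pathlib import Path

def map_compose_to_worktree(compose_file: Path, git_root: Path,
                            worktree_root: Path) -> Path:
    """Map a compose file's git-root-relative path into another worktree.

    Illustrative sketch of the mapping described above, not
    recomposable's source.
    """
    # e.g. infra/docker-compose.yml relative to the main checkout
    relative = compose_file.resolve().relative_to(git_root.resolve())
    target = worktree_root / relative
    if not target.is_file():
        raise FileNotFoundError(f"{relative} does not exist in {worktree_root}")
    return target
```

So infra/docker-compose.yml under the main checkout resolves to the same relative path under the target worktree, and a missing file fails loudly instead of silently falling back to main.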

The Claude Code workflow

This is the workflow I built the feature for:

  1. Start recomposable in your project directory, with your full stack running on main.
  2. Open Claude Code and tell it to work on auth-service in a worktree.
  3. Claude makes its changes, and you want to verify them end-to-end.
  4. In recomposable, navigate to auth-service, press t, and select the worktree Claude is working in.
  5. The service rebuilds from Claude’s branch while the rest of the stack stays on main.
  6. Check logs, run requests, and verify — if something is wrong, check the logs right there in the TUI.
  7. When done, press t again and switch back to main.

No terminal tabs, no path juggling, no guessing which branch is running.

This scales to any number of services. Claude working on three services across two worktrees? Switch all three, and the WORKTREE column shows you exactly what’s running where.

Future work

The worktree feature covers the core workflow, but there are gaps worth addressing for teams that use this pattern heavily:

Auto-detect worktree changes. Currently you manually press t to switch, but recomposable could watch for new worktrees appearing (e.g., Claude Code just created one) and prompt you to switch the affected service, removing the “go check if Claude is done, then switch” polling loop.

Worktree-aware cascading rebuilds. The dependency-aware rebuild (d) already restarts transitive dependents, but if auth-service is switched to a worktree and api-gateway depends on it, the cascade doesn’t switch api-gateway too. For tightly coupled services that should be tested from the same branch, a “cascade switch” would reduce manual steps.

Diff preview before switching. Before rebuilding from a worktree, showing a summary of what changed (files modified, services affected) would help you decide whether the switch is worth the rebuild time, especially when Claude has been working for a while and you’re not sure what exactly changed.

Branch status indicators. Showing whether a worktree’s branch is ahead/behind main, or has uncommitted changes, would help you decide which worktree to test — a branch that’s 30 commits ahead is a different risk profile than one that changed a single config file.

Multi-service worktree switch. A single action to switch all services from one compose file to the same worktree, because when Claude works on changes that span multiple services in the same repo, switching them one by one is unnecessary friction.

Install

npm install -g recomposable

Navigate to a directory with a docker-compose.yml, create a recomposable.json pointing to your compose files, and run recomposable. The worktree feature works out of the box with any git repository that has multiple worktrees.

GitHub | npm

Your AI Coding Assistant is Probably Writing Vulnerabilities. Here’s How to Catch Them.

Hi there, my fellow people on the internet. Hope you’re doing well and your codebase isn’t on fire (yet).

So here’s the thing. Over the past year I’ve been watching something unfold that genuinely worries me. Everyone and their dog is using AI to write code now. Copilot, Cursor, Claude Code, ChatGPT, you name it. Vibe coding is real, and the productivity gains are no joke. I’ve used these tools myself while building Kira at Offgrid Security, and I’m not about to pretend they aren’t useful.

But I’ve also spent a decade in security, building endpoint protection at Microsoft, securing cloud infrastructure at Atlassian, and now running my own security company. And that lens makes it impossible for me to look at AI-generated code and not ask my favorite question: what can go wrong?

Turns out, a lot.

The Numbers Don’t Lie (And They Aren’t Pretty)

Veracode recently published their 2025 GenAI Code Security Report after testing code from over 100 large language models. The headline finding? AI-generated code introduced security flaws in 45% of test cases. Not edge cases. Not obscure languages. Common OWASP Top 10 vulnerabilities across Java, Python, JavaScript, and C#.

Java was the worst offender with a 72% security failure rate. Cross-Site Scripting had an 86% failure rate. Let that sink in.

And here’s the part that surprised even me: bigger, newer models don’t do any better. Security performance has stayed flat even as models have gotten dramatically better at writing code that compiles and runs. They’ve learned syntax. They haven’t learned security.

Apiiro’s independent research across Fortune 50 companies backed this up, finding 2.74x more vulnerabilities in AI-generated code compared to human-written code. That’s not a rounding error. That’s a systemic problem.

Why Does This Keep Happening?

If you think about how LLMs learn to code, it makes total sense. They’re trained on massive amounts of publicly available code from GitHub, Stack Overflow, tutorials, blog posts. The thing is, a huge chunk of that code is insecure. Old patterns, missing input validation, hardcoded credentials, SQL queries built with string concatenation. If the training data is full of bad habits, the model will confidently reproduce those bad habits.

The other piece is that LLMs don’t understand your threat model. They don’t know your application’s architecture, your trust boundaries, your authentication flow. When you ask for an API endpoint, the model will happily generate one that accepts input without validation, because you didn’t tell it to validate. And honestly, most developers don’t include security constraints in their prompts. That’s the whole premise of vibe coding: tell the AI what you want, trust it to figure out the how.

The problem is that “the how” often skips the security bits entirely.

I categorize these into three buckets:

The Obvious Stuff – missing input sanitization, SQL injection, XSS. These are the classics that have been plaguing us for two decades and LLMs are very good at reintroducing them because they’re overrepresented in training data.

The Subtle Stuff – business logic flaws, missing access controls, race conditions. The code looks correct. It passes basic tests. But it’s missing the guardrails that a security-conscious developer would add. This is harder to catch because there’s no obvious “bad pattern” to scan for.

The Novel Stuff – hallucinated dependencies (packages that don’t exist but an attacker could register), overly complex dependency trees for simple tasks, and the reintroduction of deprecated or known-vulnerable libraries. This one is uniquely AI-flavored and it’s growing fast.
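The first bucket is easy to demonstrate. Here is a minimal sketch of the string-concatenation pattern LLMs keep reproducing, next to the parameterized fix, using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation, the pattern overrepresented in training data
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
leaked = conn.execute(query).fetchall()
print(len(leaked))  # 1: the payload turned the WHERE clause into a tautology

# SAFE: a parameterized query treats the payload as a literal string
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))    # 0: nobody is literally named "' OR '1'='1"
```

Both versions compile and both "work" on happy-path input, which is exactly why the vulnerable one survives casual review.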

So What Do We Do About It?

Here’s where I get to talk about what I’ve been building.

At Offgrid Security, the core problem we’re solving with Kira is: can we use AI to actually catch the security issues that AI introduces? Fight fire with fire, if you will.

We recently released kira-lite, an MCP (Model Context Protocol) server that plugs directly into your AI-powered development workflow. If you haven’t been following the MCP ecosystem, here’s the quick version: MCP is a standard protocol that lets AI assistants connect to external tools and data sources. Think of it as giving your AI coding assistant the ability to call out to specialized services while it’s working.

The idea behind kira-lite is straightforward. Instead of generating code and hoping for the best, your AI assistant can call kira-lite during the development process to scan for security issues before the code is even written to disk. It sits in the workflow, not after it.

Here’s how you’d set it up:

Claude Code:
claude mcp add --scope user kira-lite -- npx -y @offgridsec/kira-lite-mcp

Cursor / Windsurf / Other MCP Clients:

{
  "kira-lite": {
    "command": "npx",
    "args": ["-y", "@offgridsec/kira-lite-mcp"]
  }
}

No API keys. No accounts. No external servers. One command and you’re scanning.

What Makes This a One-Stop Solution

I’ve used a lot of security scanners in my career. Most of them do one thing okay. Some catch secrets, some catch injection flaws, some handle dependency vulnerabilities. Kira-lite was built to be the thing you don’t have to supplement with five other tools.

Here’s what ships out of the box:

376 built-in security rules across 15 languages and formats.

We’re not talking about a toy regex scanner here. This covers JavaScript, TypeScript, Python, Java, Go, C#, PHP, Ruby, C/C++, Shell, Terraform, Dockerfile, Kubernetes YAML, and more. Each language has framework-specific rules too: Django’s DEBUG=True in production, Spring’s CSRF disabled, Express.js missing helmet middleware, React’s dangerouslySetInnerHTML with unsanitized input. The stuff that generic scanners miss because they don’t understand the framework context.

92 secret detectors.
Not just AWS keys and GitHub tokens. We’re detecting credentials for Anthropic, OpenAI, Groq, HuggingFace, DeepSeek (relevant given the AI coding boom), plus cloud providers like GCP, DigitalOcean, Vercel, Netlify, Fly.io. CI/CD tokens for CircleCI, Buildkite, Terraform Cloud. SaaS tokens for Atlassian, Okta, Auth0, Sentry, Datadog. Payment keys for Stripe, PayPal, Square, Razorpay. The list goes on. If an AI coding assistant hardcodes a credential (and they love doing this), kira-lite will catch it.

Dependency vulnerability scanning across 11 ecosystems.
This one is huge. Kira-lite scans your lockfiles against the OSV.dev database (the same data source behind Google’s osv-scanner) for known CVEs. It supports npm, PyPI, Go, Maven, crates.io, RubyGems, Packagist, NuGet, Pub, and Hex. Thirteen lockfile formats total. Remember what I said about AI assistants introducing too many dependencies? This is how you catch the ones with known vulnerabilities before they become your problem.

Full OWASP coverage.
And I don’t mean “we cover a few items from the Top 10.” Kira-lite maps to OWASP Top 10:2025, OWASP API Security Top 10, and OWASP LLM Top 10:2025. That last one is particularly relevant. It catches things like LLM output being passed directly to eval() or exec(), prompt injection patterns, and user input concatenated into prompt templates. If you’re building AI-powered applications (and who isn’t, these days), these are the vulnerabilities that existing scanners completely ignore.

Five distinct MCP tools that your AI assistant can invoke contextually:

  • scan_code scans a snippet before it’s written to disk. The AI literally checks its own work before handing it to you.
  • scan_file scans an existing file and automatically triggers dependency scanning if it hits a lockfile.
  • scan_diff compares original vs modified code and reports only new vulnerabilities. This is incredibly useful during refactors where you don’t want noise from pre-existing issues.
  • scan_dependencies does a full dependency audit on demand.
  • fix_vulnerability provides remediation guidance for specific vulnerability IDs or CWEs.

And the scanning happens entirely on your machine. Kira-lite ships with Kira-Core, a compiled Go binary bundled for macOS, Linux, and Windows. Your code never leaves your laptop. For anyone working on proprietary codebases or in regulated industries, that’s not a nice-to-have, it’s a requirement.

Why MCP and Why Now?

I’ve been thinking about this a lot. The traditional security tooling model is built around gates and checkpoints. Write code, commit, run CI pipeline, scanner finds issues, developer goes back to fix. It works, but it’s slow and creates friction that developers (understandably) resent.

With MCP, the security tool becomes a collaborator rather than a gatekeeper. The AI assistant can proactively check its own work. It can call scan_code before presenting a snippet to you, catch the SQL injection in the Python function or the missing authentication check on the API endpoint, and fix it in the same conversation. No context switch. No waiting for CI. No separate dashboard to check.

With Claude Code, you can even set it up so that every edit is automatically scanned. Drop a CLAUDE.md file in your project that tells Claude to call scan_code before every write operation, and you’ve essentially got a security co-pilot riding shotgun on every line of AI-generated code.

This isn’t a magic bullet. I want to be clear about that. No tool catches everything, and the security landscape for AI-generated code is evolving faster than any single solution can keep up with. But the shift from “scan after the fact” to “scan during generation” is significant. It’s the difference between finding the fire after it’s spread and catching the spark.

Things I’d Recommend Right Now

Whether you use kira-lite or not, here are some things I’d strongly suggest if your team is using AI coding assistants:

Don’t trust, verify. Treat AI-generated code the same way you’d treat code from a new contractor who doesn’t know your codebase. Review it. Question it. Don’t assume it’s handling edge cases or security concerns just because it compiles.

Add security context to your prompts. If you’re asking an AI to write an API endpoint, explicitly say “include input validation, authentication checks, and parameterized queries.” It won’t add these by default.

Automate scanning in the loop. Whether it’s through an MCP server like kira-lite, a SAST tool in your CI pipeline, or both, don’t ship AI-generated code without automated security analysis. The volume of code being generated is too high for manual review alone.

Watch your dependencies. AI assistants love adding packages. Check that those packages actually exist, are maintained, and don’t have known vulnerabilities. Package hallucination is a real attack vector now. Tools like kira-lite’s dependency scanner can automatically check your lockfiles against CVE databases, which saves you from manually auditing every npm install your AI assistant decides to run.
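The lockfile check reduces to two lookups per package. The sketch below is a toy version of what a dependency auditor does; the package names, registry snapshot, and advisory data are all made up for illustration (real tools like kira-lite query OSV.dev):

```python
def audit(locked, known_packages, advisories):
    """Flag packages missing from the registry and known-vulnerable versions."""
    missing = [pkg for pkg in locked if pkg not in known_packages]
    vulnerable = [f"{pkg}@{ver}" for pkg, ver in locked.items()
                  if ver in advisories.get(pkg, set())]
    return missing, vulnerable

# Hypothetical lockfile, registry snapshot, and advisory data
locked = {"left-pad": "1.3.0", "examplelib": "2.0.1", "totally-real-pkg": "0.0.1"}
known = {"left-pad", "examplelib"}
advisories = {"examplelib": {"2.0.1", "2.0.2"}}

missing, vulnerable = audit(locked, known, advisories)
print(missing)     # ['totally-real-pkg'] -> possibly hallucinated, attacker-registerable
print(vulnerable)  # ['examplelib@2.0.1'] -> pinned to a known-vulnerable version
```

The "missing" case is the package-hallucination scenario: a name that doesn't exist in the registry yet is a name an attacker can register.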

Educate your team. The developers using AI tools need to understand that “working code” and “secure code” are not the same thing. This isn’t about slowing people down. It’s about building awareness so they know what to look for.

The Road Ahead

I genuinely believe AI is going to transform how we build software. I’m building an AI security company, so clearly I’m bought in on that future. But we’re in this weird in-between phase where the tools are powerful enough to generate massive amounts of code and not yet smart enough to make that code secure by default.

That gap is where the next wave of security work lives. It’s where I’m spending all my time right now, and honestly, it’s one of the most interesting problems I’ve worked on in my career.

If you’re working in this space too, or if you’re a developer trying to figure out how to use AI tools without accidentally introducing a bunch of CVEs, I’d love to chat. Hit me up on LinkedIn or check out what we’re building at Offgrid Security.

And if you want to try kira-lite, the package is up on npm: @offgridsec/kira-lite-mcp. One npx command, zero config, and 376 rules scanning your code before it ever hits the filesystem. I think you’ll find it genuinely useful.

Will be back soon with more on this topic. There’s a lot more to unpack, especially around how agent-based workflows are creating entirely new attack surfaces.

Keep hacking till then ();