Python Unplugged on PyTV Recap

Last week marked the fruition of almost a year of hard work by the entire PyCharm team. On March 4th, 2026, we hosted Python Unplugged on PyTV, our first-ever community conference: a ’90s music-inspired online event for the Python community.

Python Unplugged on PyTV – Free Online Python Conference

The PyCharm team is a fixture at Python conferences globally, such as PyCon US and EuroPython, but we recognize that while attending a conference can be life-changing, the costs involved put it out of reach for many Pythonistas.

We wanted to recreate the entire Python conference experience in a digital format, complete with live talks, hallway tracks, and Q&A sessions, so anyone, anywhere in the world, could join in and participate.

And we did it! Superstar speakers from across the Python community joined us in our studio in Amsterdam, Netherlands – the country where Python was born. Some of them traveled for over 10 hours, and one even joined with their newborn baby! Travis Oliphant, of NumPy and SciPy fame, was ultimately unable to join us in person, but he kindly pre-recorded a wonderful talk and participated in a live Q&A after it, despite it being very early morning in his time zone.

Cheuk Ting Ho, Jodie Burchell, Valerie Andrianova

The PyCharm team is extremely grateful for the community’s support in making this happen.

The event

We livestreamed the entire event from 11:00 am to 6:30 pm CET – seven and a half hours of content featuring 15 speakers, a PyLadies panel, and an ongoing quiz with prizes. Topics covered the future of Python, AI, data science, web development, and more.

Here is the complete list of speakers and timestamped links to their talks:

  • Carol Willing – JupyterLab Core Developer
  • Deb Nicholson – Executive Director, Python Software Foundation
  • Ritchie Vink – Creator of Polars
  • Travis Oliphant – Creator of NumPy
  • Sarah Boyce – Django Fellow 
  • Sheena O’Connell – Python Software Foundation Board Member 
  • Marlene Mhangami – Senior Developer Advocate at Microsoft 
  • Carlton Gibson – Creator of multiple open-source projects in the Django ecosystem
  • Tuana Çelik – Developer Relations Engineer at LlamaIndex
  • Merve Noyan – Machine Learning Engineer at Hugging Face 
  • Paul Everitt – Developer Advocate at JetBrains
  • Mark Smith – Head of Python Ecosystem at JetBrains
  • Georgi Ker – Director and Fellow of the Python Software Foundation
  • Una Galyeva – Head of AI at Geobear Global and PyLadies Amsterdam organizer 
  • Jessica Greene – Senior Machine Learning Engineer at Ecosia
The studio room with presenter’s desk and Q&A table.

Production meeting the day before the event

We spent the afternoon doing final checks and a run-through with the studio team at Vixy Live. They were very professional and patient with us as we were working in a studio for the first time. With their help, we were confident that the event the next day would go smoothly.

Livestream day

On the day of the livestream, we arrived early to get our makeup done. The makeup artists were absolute pros, and we all looked great on camera. One of our speakers, Carol, jokingly said that she now looks 20 years younger! The hosts, Jodie, Will, and Cheuk, were decked out in ’90s fashion and vibes.

Python Team Lead Jodie Burchell bringing the 90s back

We also had swag designed by our incredible marketing team, including t-shirts, stickers, posters, and tote bags.

PyTV Stickers for all participants
PyTV Tote bags

PyTV posters

Python content for everyone

After a brief opening introducing the conference and the event Discord, we began with a series of talks focused on the community, learning Python, and other hot Python topics. We also had two panels, both absolutely inspiring: one on the role of AI in open source and another featuring prominent members of PyLadies.

Following our first block of speakers, we moved on to web development-focused talks from key people involved with the Django framework. We then had a series of talks from experts across the data science and AI world, including speakers from Microsoft, Hugging Face, and LlamaIndex, who gave us up-to-date insights into open-source AI and agent-based approaches. We ended with a talk by Carol Willing, one of the most respected figures in the Python community.

Throughout the day, we ran a quiz to test the audience’s knowledge of Python and its community. Since many audience members were still learning Python, we hope the quiz taught them some fun facts along the way.

First of 8 questions on the Python ecosystem

Sarah Boyce, Will Vincent, Sheena O’Connell, Carlton Gibson, Marlene Mhangami

Next year?

Looking at the numbers, more than 5,500 people joined us during the live stream, with most of them watching at least one talk. As of this writing, another 8,000 people have watched the event recording.

We’d love to do this event again next year. If you have suggestions for speakers, topics, swag, or anything else, please leave them in the comments!

Watch now

JavaLand 2026

JavaLand 2026 was finally back in the theme park after the last two years’ disaster at a car racing track. And this was good. Europa-Park is different from Phantasialand, but it has the same kitschy, German way of going all in on being true to the theme. From the African and Chinese vibes of Phantasialand, we were now in Italy, Spain, France, and Marrakech – all mixed together in a way only possible in Germany.

1,493 attendees is a very good number for a conference these days. And the German Java community is very vibrant and opinionated, so it is almost impossible to get from one end of the exhibition floor to the other without stopping for a couple of conversations on the way. In my opinion, these hallway track conversations are the most important part of a conference, and JavaLand has a lot of them.

My talk, The Past, Present, and Future of Enterprise Java, was set up in the Dome, which seats 700 people. It is hard to count occupied chairs, but I estimate that somewhere between 100 and 150 people attended my talk.

In the evening on the first day, the park opened up a couple of the attractions. Unfortunately, a thunderstorm passed by at that time, so going on a rollercoaster didn’t really tempt me. Dinner at one of the restaurants was a much better option. I don’t really like rollercoasters anyway, but that is a secret 🙂

One thing I missed this year was the traditional JavaLand Jogging on Wednesday morning. Since I had arrived in the European time zone only a day before I travelled to JavaLand, I managed well without it. I did go for a short run on Thursday morning before breakfast, though.

Ivar Grimstad


Devnexus 2026

I can’t believe this was the ninth time I was a speaker at Devnexus. For the last couple of years, we have sponsored and had a dedicated Jakarta EE track and the entire team present, so it was a bit different to be handling the booth as the only person from the Eclipse Foundation present. This time, we shared the booth with MicroProfile in the community area. Luckily, I had great help from our community with staffing the booth. Mike, Emily, Rustam, Eudris, and Fred all helped out, making it a success.

My talk, What Spring Developers Should Know About Jakarta EE, is a fairly popular one that I have given a couple of times now. It was well attended, and I received great feedback from the attendees afterward.

As always, the most important track at Devnexus is the Hallway Track. It is amazing how many good conversations you can have when you hang out with 1,400 attendees, speakers, volunteers, and others over a couple of days.

And, of course, we had the traditional #runWithJakartaEE morning runs. We usually run around the Mercedes-Benz Stadium before finishing up the 5 km in Centennial Olympic Park. The park is closed when we start running at 6:30 AM, but it opens at 7:00 AM, which is about when we return from the loop around the stadium.

Ivar Grimstad


Beyond `border-radius`: What The CSS `corner-shape` Property Unlocks For Everyday UI

When I first started building websites, rounded corners required five background images, one for each corner plus one for the body, and a prayer that the client wouldn’t ask for a different radius. Then the border-radius property landed, and the entire web collectively sighed with relief. That was over fifteen years ago, and honestly, we’ve been riding that same wave ever since. Just as back then, I hope we can treat this new feature as a progressive enhancement while it slowly makes its way to other browsers.

I like a good border-radius as much as the next person, but the fact is that it only gives us one shape. Round. That’s it. Want beveled corners? clip-path. Scooped ticket edges? An SVG mask. Squircle app icons? A carefully tuned SVG that you hope nobody asks you to animate. We’ve been hacking around the limitations of border-radius for years, and those hacks come with real trade-offs: borders don’t follow clip-paths, shadows get cut off, and you end up with brittle code that breaks the moment someone changes a padding value.

Well, the new corner-shape changes all of that.

What Is corner-shape?

The corner-shape property is a companion to border-radius. It doesn’t replace it; it modifies the shape of the curve that border-radius creates. Without border-radius, corner-shape does nothing. But together, they’re a powerful pair.

The property accepts these values:

  • round: the default, same as regular border-radius,
  • squircle: a superellipse, the smooth Apple-style rounded square,
  • bevel: a straight line between the two radius endpoints (snipped corners),
  • scoop: an inverted curve, creating concave corners,
  • notch: sharp inward cuts,
  • square: effectively removes the rounding, overriding border-radius.

And you can set different values per corner, just like border-radius:

corner-shape: bevel round scoop squircle;
/* top-left, top-right, bottom-right, bottom-left */

You can also use the superellipse() function with a numeric parameter for fine-grained control.

.element { 
  border-radius: 25px;
  corner-shape: superellipse(0); /* equal to 'bevel' */
}

So the question here might be: why not call this property “border-shape” instead? Well, first of all, that is something completely different that we’ll get to play around with soon. Second, it applies to a bit more than borders: outlines, box shadows, and backgrounds all follow it. That’s something the clip-path property could never do.

Why Progressive Enhancement Matters Here

At the time of writing (March 2026), corner-shape is only supported in Chrome 139+ and other Chromium-based browsers. That’s a significant chunk of users, but certainly not everyone. The temptation is to either ignore the property until it’s everywhere or to build demos that fall apart without it.

I don’t think either approach is right. The way I see it, corner-shape is the perfect candidate for progressive enhancement, just as border-radius was in the age of Internet Explorer 6. The baseline should use the techniques we already know, such as border-radius, clip-path, and radial-gradient masks, and look intentionally good. Then, for browsers that support corner-shape, we upgrade the experience. Sometimes this can be as simple as providing a more basic default; sometimes it might need to be a bit more.

Every demo in this article is created with that progressive enhancement idea. The structure for the demos looks like:

@layer base, presentation, demo;

The presentation layer contains the full polished UI using proven techniques. The demo layer wraps everything in @supports:

@layer demo {
  @supports (corner-shape: bevel) {
    /* upgrade styles here */
  }
}

No fallback banners, no “your browser doesn’t support this” messages. Just two tiers of design: good and better. I thought it could be nice just to show some examples. There are a few out there already, but I hope I can add a bit of extra inspiration on top of those.

Demo 1: Product Cards With Ribbon Badges

Every e-commerce site has them: those little “New” or “Sale” badges pinned to the corner of a product card. Traditionally, getting that ribbon shape means reaching for clip-path: polygon() or a rotated pseudo-element; let’s call it “fiddly code” that can fall apart the moment someone changes a padding value.

But here’s the thing: we don’t need the ribbon shape in the baseline. A simple badge with slightly rounded corners tells the same story and looks perfectly fine:

.product__badge {
  border-radius: 0 4px 4px 0;
  background-color: var(--badge-bg);
}

That’s it. A small, clean label sitting flush against the left edge of the card. Nothing fancy, nothing broken. It works in every browser.

For browsers that support corner-shape, we enhance:

@layer demo {
  /* If the browser supports `corner-shape` */
  @supports (corner-shape: bevel) {
    .product {
      border-radius: 40px;
      corner-shape: squircle;
    }

    .product__badge {
      padding: 0.35rem 1.4rem 0.35rem 1rem;
      border-radius: 0 16px 16px 0;
      corner-shape: round bevel bevel round;
    }
  }
}

The round bevel bevel round combination creates a directional ribbon. Round where it meets the card edge, beveled to a point on the other side. No clip-path, no pseudo-element tricks. Borders, shadows, and backgrounds all follow the declared shape because it is the shape.

The cards themselves upgrade from border-radius: 12px to a larger size and the squircle corner-shape, that smooth superellipse curve that makes standard rounding look slightly off by comparison. Designers will notice immediately. Everyone else will just say it “feels more premium.”

Hot tip: Using the squircle value on card components is one of those upgrades where the before-and-after difference can be subtle in isolation, but transformative across an entire page. It’s the iOS effect: once everything uses superellipse curves, plain circular arcs start looking out of place. In this demo, I did exaggerate a bit.

The primary button starts beveled, faceted, and gem-like, and softens to squircle on hover. Because corner-shape values animate via their superellipse() equivalents, the transition is smooth. It’s a fun interaction that used to be hard to achieve but is now a single property (used alongside border-radius, of course).

The secondary button uses superellipse(0.5), a value that sits between bevel and a standard round corner, combined with a larger border-radius for a distinctive pill-like shape. The danger button gets a more prominent squircle with a generous radius. And notch and scoop each bring their own sharp or concave personality.
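Here is a rough sketch of those button treatments in CSS. The class names and exact values are my own illustration, not taken from the demo code:

.btn-primary {
  border-radius: 18px;
  corner-shape: bevel; /* faceted, gem-like at rest */
  transition: corner-shape 0.4s ease;
}

.btn-primary:hover {
  corner-shape: squircle; /* animates via its superellipse() equivalent */
}

.btn-secondary {
  border-radius: 999px;
  corner-shape: superellipse(0.5); /* between bevel and round */
}

.btn-danger {
  border-radius: 28px;
  corner-shape: squircle;
}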

Beyond buttons, the status tags get corner-shape: notch, those sharp inward cuts that give them a machine-stamped look. The directional arrow tags use round bevel bevel round (and its reverse for the back arrow), replacing what used to require clip-path: polygon(). Now borders and shadows work correctly across all states.
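A sketch of those tag shapes, again with illustrative class names and values:

.tag-status {
  border-radius: 8px;
  corner-shape: notch; /* machine-stamped inward cuts */
}

.tag-next {
  border-radius: 0 14px 14px 0;
  corner-shape: round bevel bevel round; /* points right */
}

.tag-back {
  border-radius: 14px 0 0 14px;
  corner-shape: bevel round round bevel; /* mirrored, points left */
}

Because these are real corners rather than clipped boxes, a border or box-shadow on the tag traces the pointed shape exactly.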

Hot tip: corner-shape: scoop pairs beautifully with serif fonts and warm color palettes. The concave curves echo the organic shapes found in editorial design, calligraphy, and print layouts. For geometric sans-serif designs, stick with squircle or bevel.

What I like about this demo is how the shape hierarchy mirrors the content hierarchy. The most important element (featured plan) gets the most distinctive shape (scoop). The badge gets the sharpest shape (bevel). Everything else gets a simpler upgrade (squircle). Shape becomes a tool for visual emphasis, not just decoration.

Browser Support

As of writing, corner-shape is available in Chrome 139+ and Chromium-based browsers. Firefox and Safari don’t support it yet. The spec lives in CSS Borders and Box Decorations Module Level 4, which is a W3C Working Draft as of this writing.

For practical use, that’s fine. That’s the whole point of how these demos are built. The presentation layer delivers a polished, complete UI to every browser. The demo layer is a bonus for supporting browsers, wrapped in @supports (corner-shape: ...). I lived through the time when border-radius was only available in Firefox. Somewhere along the line, we seem to have forgotten that not every website needs to look exactly the same in every browser. What we really want is no “broken” layouts and no “your browser doesn’t support this” messages, but rather a beautiful experience that just works and progressively enhances with a bit of extra joy. In other words, we’re working with two tiers of design: good and better.

Wrapping Up

The approach I keep coming back to is: don’t design for corner-shape, and don’t design around the lack of it. Design a solid baseline with border-radius and then enhance it. The presentation layer in every demo looks intentionally good. It’s not a degraded version waiting for a better browser. It’s a complete design. The demo layer adds a dimension that border-radius alone can’t express.

What surprises me most about corner-shape is the range it offers — the amazing powerhouse we have with this single property: squircle for that premium, superellipse feel on cards and avatars; bevel for directional elements and gem-like badges; scoop for editorial warmth and visual hierarchy; notch for mechanical precision on tags; and superellipse() for fine control between round and squircle. And the ability to mix values per corner (round bevel bevel round, scoop round) opens up shapes that would have required SVG masks or clip-path hacks.

We went from five background images to border-radius, to corner-shape. Each step removed a category of workarounds. I’m excited to see what designers do with this one.

Further Reading

  • corner-shape (MDN)
  • “What Can We Actually Do With corner-shape?”, Daniel Schwarz
  • CSS Borders and Box Decorations Module Level 4 (W3C specification)
  • A fun demo for “eco-labels”, Sebastian on CodePen

And I just wanted to balance my checkbook…

Living on a fixed income and getting charged overdraft fees is a real kick in the pants! I can’t seem to keep track of all my auto-pays and recurring bills with pen and paper. So I set out to build Kalverion_bot, a Telegram bot that does it for me, with an OpenClaw gateway that does little more than natural language parsing. This is what I came up with:

Kalverion_bot on GitHub

🦞 Built with OpenClaw for AI-powered Telegram interaction
📒 Double-entry accounting
📊 Cashflow forecasting
🔁 Recurring bills & income
💳 Debt payoff optimization
📈 Financial graphs
🤖 AI transaction parsing with Natural Language

How I Updated 1,000+ CTAs on My Blog Without Writing a Single Line of Code

The Problem

I was migrating my community platform. Sounds simple, right? Just change a link.

But I had a “small” detail: 1,000+ articles with hardcoded CTAs in the HTML pointing to the old community URL.

Why hardcoded? Because I’m obsessive about efficiency. Every extra WordPress plugin adds milliseconds of load time, so the CTAs went directly into the HTML.

The Obvious Solutions (And Why They Don’t Work)

Option 1: SQL Replace

UPDATE wp_posts SET post_content = 
REPLACE(post_content, 'old-community-url.com', 'skool.com/new-community');

Problem: This solves the link, but wastes the opportunity.

Each article is different:

  • Posts about startup funding opportunities → CTA about connecting with investors
  • Posts about AI tools → CTA about implementation
  • Posts about analysis → CTA about going deeper

A blind replacement generates generic CTAs. I didn’t want that.

Option 2: Manual (One by One)

Open 1,000+ posts. Read each one. Generate contextual CTA. Update.

Problem: 100+ hours of tedious work. And I’m human — I get tired, distracted, make mistakes.

Option 3: Custom Script

Write a Python/Node script that reads the post, uses AI to analyze the content, generates a contextual CTA, and updates WordPress.

Problem: Days of development. Debugging. Maintenance. For something I’ll do once.

The Real Solution: n8n + Groq + Llama 3.3

I needed something that was:

  • Intelligent (semantic understanding of content)
  • Fast (can’t wait weeks)
  • Economical (ideally free)
  • Reusable (for future changes)
  • Visual (easy to adjust without rewriting code)

Enter: n8n + Groq + Llama 3.3

The Stack

  • n8n (self-hosted): Visual workflow orchestrator
  • Groq API: Free access to super-fast open source models
  • Llama 3.3 70B: Meta’s model with strong reasoning
  • WordPress REST API: For reading and updating posts

The Workflow (Step by Step)

1. Get Posts from WordPress

HTTP Request node → GET /wp-json/wp/v2/posts?per_page=100

Parameters:

  • per_page=100 (max per batch)
  • _fields=id,title,content,link (only what’s needed)
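For reference, the request this node makes is equivalent to the following Python sketch. The domain and credentials are placeholders; WordPress application passwords work over plain Basic auth:

import requests

# Rough equivalent of the n8n HTTP Request node (placeholders throughout).
resp = requests.get(
    "https://example.com/wp-json/wp/v2/posts",
    params={"per_page": 100, "_fields": "id,title,content,link"},
    auth=("admin", "application-password"),  # WordPress application password
    timeout=30,
)
resp.raise_for_status()
posts = resp.json()  # a list of {id, title, content, link} objects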

2. Process One by One

“Split in Batches” node → batch_size = 1

Why one by one? To control rate limits and see progress in real time.

3. The Brain: LLM Agent (Groq + Llama 3.3)

System Prompt:

You are a content editor specialized in CTAs for startup blogs.

RULES:
1. If NO CTA exists → Add one before the last paragraph
2. If there's an old community URL → Replace with new URL
3. Update button color if needed

CTA based on content type:
- Funding/investment posts → "Connect with similar founders..."
- AI/Tools → "Discover how others are implementing this..."
- Analysis → "Go deeper on these topics..."

Respond in JSON:
{
  "content": "updated HTML or null",
  "hasChanges": true/false
}

User Prompt:

Title: {{ $json.title }}
Categories: {{ $json.categories }}
Content: {{ $json.content }}

4. Decision: Update or Skip?

IF node → {{ $json.hasChanges }} === true

If TRUE → Update WordPress

If FALSE → Log “No changes needed”

5. Update WordPress

HTTP Request node → POST /wp-json/wp/v2/posts/{{ $json.postId }}

Body:

{
  "content": "{{ $json.updatedContent }}"
}
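Again, this boils down to a single REST call. A rough Python equivalent, with the same placeholder domain and credentials and a made-up post ID:

import requests

# WordPress accepts POST to /posts/<id> for partial updates;
# only the fields present in the body are changed.
resp = requests.post(
    "https://example.com/wp-json/wp/v2/posts/123",
    json={"content": "<p>updated HTML with the new CTA</p>"},
    auth=("admin", "application-password"),
    timeout=30,
)
resp.raise_for_status()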

6. Loop Back

Returns to “Split in Batches” node → next post

The Real Numbers

| Metric | Result |
| --- | --- |
| Workflow design | 2 hours |
| Execution (1,000 posts) | ~1.5 hours |
| Cost | $0 (Groq free tier) |
| Posts updated | 847 (the rest were already fine) |
| Contextual CTAs generated | 847 |
| Lines of code written | 0 |

Comparison:

  • SQL replace: 5 min, but generic CTAs ❌
  • Manual: 100+ hours, inconsistent ❌
  • Custom script: 2-3 days development + debugging ❌
  • n8n + AI: 3.5 hours total, perfect result ✅

Why This Matters

1. No-Code + AI = Multiplied Judgment

I didn’t replace my judgment with AI. I multiplied it 1,000x.

I defined:

  • WHAT: Update CTAs with relevant context
  • WHY: Migration + improve conversion

The AI executed the HOW with semantic understanding of the content.

2. Visual > Scripts for Real Business Cases

A visual workflow in n8n is:

  • ✅ Easier to understand (even for “future me”)
  • ✅ Faster to adjust (drag & drop)
  • ✅ Easier to reuse (duplicate and modify)

I didn’t write code because I didn’t need code.

3. Open Source LLMs Are Production-Ready

Llama 3.3 70B via Groq:

  • 100-200 tokens/second (10x faster than OpenAI)
  • Free (with reasonable limits)
  • Comparable quality to GPT-4o for structured tasks

You don’t need GPT-5 for this. Open source is enough.

Lessons Learned

Do Well:

  • Start with a small batch: Test with 10 posts before processing all 1,000
  • Detailed logs: Each post logged (updated/skipped/error)
  • Conservative rate limits: 1 post every 2-3 seconds (avoids throttling)
  • Structured outputs: Guaranteed JSON with schema validation (see the sketch after this list)
  • Idempotency: Running twice doesn’t break anything (detects already-updated posts)
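For the structured outputs point, a minimal JSON Schema for the response format from the system prompt can look like this (a sketch, not the exact schema from my workflow):

{
  "type": "object",
  "properties": {
    "content": { "type": ["string", "null"] },
    "hasChanges": { "type": "boolean" }
  },
  "required": ["content", "hasChanges"],
  "additionalProperties": false
}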

Avoid:

  • Not testing enough: Almost launched with the full batch without validating output
  • Blindly trusting AI: Always validate a sample before running at scale
  • Ignoring WordPress cache: Had to purge CDN cache after

The Future of This Stack

This workflow isn’t disposable. I’m going to reuse it for:

  • Seasonal CTA updates (Black Friday, annual opportunities)
  • Message A/B testing (change CTA massively, measure conversion)
  • Format migration (if I change the CTA design in the future)
  • Content translation (same flow, different prompt)

Investment: 2 hours

ROI: Infinite (I’ll use it 10+ more times)

Conclusion

I had a real business problem: 1,000+ posts with CTAs that needed updating with context.

The “easy” solutions (SQL) were insufficient. The “complex” solutions (manual/script) were inefficient.

n8n + Groq + Llama 3.3 = The perfect middle ground.

This isn’t “the future” — this is today.

The tools exist. They’re free (or cheap). They’re accessible.

The question isn’t “can I do this?”

The question is “what else can I automate this way?”

What would you automate with this approach? Share in the comments.

📝 Originally published in Spanish at cristiantala.com

Why Your Fintech API Code Examples Are a Liability

A developer copies a code example from your API docs. Changes the account ID. Updates the amount. Hits send.

The money moves. To the wrong account. Using a deprecated endpoint your docs still show as valid.

In a social media API, a wrong code example posts the wrong image. Embarrassing. In a financial services API, a wrong code example moves money. And Ctrl+Z is not a feature that banking infrastructure supports.

Wrong API examples in fintech aren’t documentation bugs. They’re operational incidents waiting to happen.

The Numbers

| Stat | Source |
| --- | --- |
| 75% of production APIs have variances from their OpenAPI specs | Nordic APIs / APIContext |
| 70% of devs rate code examples as the #1 most important doc component | SmartBear 2020 State of API |
| 55% of teams struggle with inconsistent/outdated documentation | Postman 2025 State of the API |
| 83% of developers consider doc quality when evaluating an API | Monetizely |
| 71% have chosen one API over another because of better docs | Monetizely |

Three-quarters of APIs don’t match their own docs. The thing developers trust most is code examples. And doc quality is the deciding factor for most adoption decisions.

3 Ways Code Examples Silently Break

1. The endpoint changes. The example doesn’t.

Payments team ships V3 of the transfers endpoint. API reference gets updated. The four code examples in tutorials still point to V2. V2 still works — for now.

# Your docs still show this:
POST /v2/transfers

# Your API is actually on:
POST /v3/transfers

Developer builds an entire integration on a deprecated path.

2. A required field gets added. The example lies.

Compliance mandates a new purpose_code field on international transfers. The API rejects requests without it. But the quickstart guide doesn’t include it.

// What your docs show:
{
  "amount": 10000,
  "currency": "USD",
  "destination": "acct_xxx"
}

// What the API actually requires:
{
  "amount": 10000,
  "currency": "USD",
  "destination": "acct_xxx",
  "purpose_code": "P0101"  // 400 error without this
}

3. Default values change. Silently.

Your API defaults settlement_speed from "standard" (T+1) to "instant". The code example doesn’t specify the field because “the default is fine.”

Every developer copying that example now gets instant settlement — with different fees — and doesn’t know it.

This is the one that should terrify you.

The Real-World Cost

This isn’t theoretical:

  • PayPal had endpoints listed in docs that didn’t exist, undocumented webhook delays, and session timeouts merchants discovered through trial and error — in production
  • Healthcare: A deprecated SOAP endpoint remained accessible for 6 months while vulnerabilities were only patched in newer REST services. 450,000 patient records exposed.
  • Financial services: Average $300,000+ per hour in API-related downtime costs
  • Failed payments cost the global economy $118.5 billion annually

Write Once, Pray Forever

Here’s how fintech API documentation actually works in practice: An engineer writes a feature, someone writes the docs and code examples, and on the day they’re written, everything is accurate. Then the feature evolves over 6-12 months. The API reference gets updated — maybe. But the code examples scattered across guides and tutorials? Almost certainly not.

76% of failed API integrations result from inadequate documentation or support.

What Stripe, Twilio, and Plaid Do

Stripe: SDK generation pipeline built on Ruby DSL → OpenAPI specs → auto-generated library code in multiple languages. They also built and open-sourced Markdoc. Interactive examples see 62% higher engagement.

Twilio: Migrated 5,000+ pages with Yoyodyne. Samples update automatically when API or codegen tool changes. Zero manual sync.

Plaid: Discovered developers bypassed navigation for search. Expanded their search index by hundreds of entries. Behavior data drives docs improvement.

Common thread: They treat code examples as testable code, not documentation text.

Your 4-Week Safety Net

Week 1: Audit your quickstart. Fresh environment. Run every example.
Week 2: Fix the broken quickstart examples. Highest traffic first.
Week 3: Add example testing to CI. One file, one test. Fail the build. (See the sketch after this list.)
Week 4: Expand to top 5 integration guides. Set up freshness tracking.
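To make Week 3 concrete, here is a minimal sketch of such a CI test in Python. The doc path and the required-field list are illustrative assumptions, not anyone's real setup:

# test_quickstart_examples.py - a minimal sketch; adjust the path and
# required fields to your own API.
import json
import pathlib
import re

DOC = pathlib.Path("docs/quickstart.md")
JSON_BLOCK = re.compile(r"```json\s*(\{.*?\})\s*```", re.S)

def test_quickstart_json_examples():
    blocks = JSON_BLOCK.findall(DOC.read_text())
    assert blocks, "no JSON examples found: did the quickstart move?"
    for raw in blocks:
        payload = json.loads(raw)  # a malformed example fails the build
        # Keep this list in sync with the API's actual required fields.
        for field in ("amount", "currency", "destination", "purpose_code"):
            assert field in payload, f"example missing required field {field!r}"

One file, one test, and the build fails the moment an example drifts out of sync.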

Tools That Help

| Tool | What It Does |
| --- | --- |
| Pact | Contract testing — catches breaking API changes before production |
| Vale | Open source prose linter for style guide enforcement |
| EkLine | Managed docs automation — style, links, terminology on every PR in CI/CD |
| Spectral | OpenAPI linting — catches spec drift |

The Bottom Line

In most software, documentation that’s 95% accurate is fine. In financial services, the 5% that’s wrong is where someone loses money.

An outdated doc isn’t a typo. It’s a bug. In fintech, that bug moves money.

We’ll be at Fintech Meetup at the end of March. If any of this resonated, let’s connect. ekline.io

“You Wouldn’t Steal a DIV”: How I Built My Portfolio

A story about how I built my portfolio and what went through my mind while building it

Hello, World! Here’s my first ever blog.

I wanted to talk a bit about my portfolio, how I built it, why I made some of these architectural and design decisions, and why I shamelessly ‘borrowed’ my way to designs I love.

1 Stack, 2 Stack, Tech Stack, 4

(If you didn’t read that header in Slim Shady’s voice, I don’t know what to tell you.)

First of all, let’s talk tech stack. I used to have all of my personal projects (including my previous portfolio) built with Next.js. These were small side projects, without complicated logic, and with simple needs. Yet, I would always run into Next.js edge cases. Turbopack was failing my dev builds because it wouldn’t play nice with Contentlayer, HMR was taking actual seconds to refresh, and production builds were taking 50 seconds for projects that barely scaled. When I got to choose a stack for a new project at my job, I couldn’t justify the Next.js overhead for production apps, considering the friction I had with small personal projects.

At that time, everyone was talking about TanStack Start: the “DX God.” But even their starter app had linting and TS issues, so I wasn’t fully impressed. I stuck with only TanStack Router for that project, and as the weeks passed working with the entire TanStack ecosystem (except Start), I was really enjoying it. The documentation is well written and extensive (even too much so, sometimes), the DX actually made sense, and I could find all of the features I liked about Next.js (file-based routing, for example) while still having great performance (15-second builds).

So this led me to give TanStack Start a second chance, and I built this portfolio with it! I still love Vercel’s UX, and I wander around their dashboard to see how I should implement my stuff, but I think I’ll leave Next.js behind for now. Personal projects are made for experimenting, so I just want to try out different tech and see what I like or not, purely on feelings.

Then for the component library, I was really into coss ui. I stumbled upon it randomly one day and loved it so much. My Nathan’s AI project already had a UI heavily inspired by cal.com: send button, message suggestions… So when I saw they had a shadcn/ui library with that kind of style, it was perfect for what I needed.

I can also consider myself a proud Open Source Contributor after getting two PRs merged into coss ui (a 2-line diff and I’m not even kidding, but quality over quantity, I guess…?).

Stealing…I Mean, Getting Inspired By Chanhdai and Zed

Now I’m not going to lie and say I designed and coded everything myself. I took large parts of the code and page layout from the open-source portfolio by Chanhdai and design inspiration from zed.dev, which I had previously attempted in unfinished side projects (https://trends.brodin.dev and https://ui.brodin.dev if they’re still available).

I still went a different way than Chanhdai for the code implementation. For example, I chose to store my content in markdown files using Content Collections. I did this specifically so I could easily serve my portfolio’s content in plain text, perfect for LLM consumption, which ties right back into the needs of my Nathan’s AI project.
I also used Base-ui (because of coss ui) for components, and generally went with a different style. So this was not just a simple copy/paste or a fork, there was still a lot of work involved.

For Zed, I took the same fonts for the headings and text, and the grid layout with the “diamonds” at the intersections. It really adds personality to my portfolio (not my personality since I stole it, but some personality).

Also, by a great coincidence, Chanhdai had very basic sound design, like a sound when changing the theme, and I wanted to push things further by having different sounds around the entire app. I was looking around on the web for good sound libraries and all, but couldn’t find anything interesting. Then the next day, I saw a tweet about the upcoming release of Soundcn, which was exactly what I was looking for. So now you get a bunch of sounds when clicking on stuff in my portfolio. It’s a nice touch!

From good to WHOA

Lastly, the thing that elevated my portfolio from a good portfolio to a Whoa portfolio is the dither shaders (from Paper Shaders) that I have for page headings and section dividers. I had dither shaders in mind since the first day I saw them on React Bits, but I couldn’t find a good way to integrate them. So it ended up being a crossover between inspiration from the heading of zed.dev (open the dev console and see data-paper-shader on the headings) and Fumadocs, which had them as well.

To wrap things up

So in conclusion, I would say this portfolio is a cross between Chanhdai and Zed, with a better stack.

Go ahead, click around to hear the sound integration (it gets annoying after a while, be careful), check out the dither shaders on the headings, and check out the source code here if you want to steal from me like I did from everyone (but at least leave a star on the repo).

  • Blog Title reference: This classic: “You Wouldn’t Steal a Car”
  • My Portfolio: brodin.dev
  • Initially posted on my blog

Escaping DevOps hell with Codex

If you are a developer, you are probably well aware of all the AI goodness that has been happening. I won’t bore you with the hyperbole.

My weapon of choice is Codex. Other AI coding tools are available, but debating which one is best at what is not a very interesting conversation to me. What I want to focus on here is DevOps.

I’m the CTO of a small company, which means I switch hats a lot. I used to call myself a backend developer, but these days I do everything vaguely tech-related in our company, plus marketing, sales, and a bunch of other supporting roles. There are only a few of us in the company. If it needs doing, one of us needs to do it. ChatGPT and Codex allow us to get shit done.

DevOps sucks. It really does.

One of the hats I end up wearing is DevOps. I hate doing DevOps. I’ve been exposed to this stuff for well over two decades. I know how to do it properly. I’ve done everything from uploading zip files over ISDN lines and remotely restarting Tomcat, as you did in the early 2000s, to automating deployments with Puppet, using CloudFormation in AWS, faffing about with Kubernetes, Docker Swarm, Terraform, and much more. Lately, my weapons of choice are Ansible, Docker, and Docker Compose. I’ve invested countless hours in learning all that stuff and trying to apply it.

Why do I think DevOps sucks? To me, it feels like dropping out of warp, to use a Star Trek analogy. You have all these grand plans to get some big feature out, and then you find yourself micromanaging some insanely arcane shit in Linux to get it to tell the time correctly, deal with some convoluted networking thing, or whatever. You get blocked for weeks on end. All that to solve the age-old problem of “put this fucking thing over there and run it!” (pardon my French). I call this problem inception. You start with a grand vision: “Our shiny new backend is ready to go, let’s deploy it and announce it to the world.” Somehow, that escalates into: “I need to figure out how to set up a bastion and private networking so I don’t expose my database to the public internet.” One thing leads to another, and before you know it, you’ve sunk three months into the project.

DevOps should be simple but isn’t

DevOps is supposed to be about automating what should be automated. So, why does DevOps still feel so manual? The answer is that this stuff is genuinely complicated, and over decades we have built systems full of bear traps with terrible failure modes: data loss, security breaches, downtime, and worse. There is just a lot of stuff that a DevOps person needs to know and do. Taking shortcuts can lead to disaster. That’s why it often ends up being a full-time job.

Every once in a while, I get sucked into doing a stretch of DevOps that makes me feel stupid, because it should be simple. Instead, I end up pulling my hair out for days trying to solve weird shit that refuses to work without ritualistic bullshit, magical command-line incantations, and configuration files that need to be exactly right. I know some truly excellent operations and DevOps people and, honestly, I suffer a bit from imposter syndrome whenever I have to do this stuff myself. I’m skilled enough to be dangerous, and I know it.

Codex takes the pain away

I got pulled into the latest round of this two weeks ago. We’ve been eyeing our setup in Google and concluded that we’re spending about 10K per year on hosting. It works great, but it’s a lot, and we’d prefer to give ourselves a little raise instead of donating to Google. So, I embarked on the plan to migrate to Hetzner.

But I used Codex to do it. We also have a second setup that, for customer reasons, runs in Telekom Cloud, which is basically an OpenStack-based environment. I already had a lot of Ansible scripts to provision that.

I started by telling Codex to refactor and modernize that codebase and set up a new inventory for my brand-new Hetzner setup. I created a few VMs in Hetzner, a private network, and a load balancer. One of the VMs acts as a bastion so you can SSH into it to reach the other ones that don’t have a public IP address.

In small steps, I fixed, upgraded, and modernized the Ansible scripts, using the new Hetzner setup as the test bed. I let Codex do all the work. I got it to fix the Ansible code and drive the provisioning through the tools on my laptop and over SSH. I set up skills and guardrails around the process.

When the Ansible scripts failed, I got it to debug why and implement fixes. I got it to research workarounds. A lot of this was me guiding it along the way. I was leaning on my 2+ decades of experience, but I did not touch a single line of code.

This is actually key. As you go, you will see it struggle and figure things out. When that happens, you just ask it to record what it just did as a skill. You read over the markdown it generates, suggest further improvements, and the next time it does better. This feedback loop between you and the AI is super important. It’s the difference between doing things your way, i.e. the right way, and doing it in whatever way it dreams up. Knowing what the right way is, is an experience-driven thing. And this is now our added value as software makers.

After 3 days, I switched over the DNS and shut down the last of our Google VMs. That’s 3 days from creating the Hetzner account, not counting the extra day I spent getting them to raise my VM limit because I was still in my first month.

Doing rolling restarts with Elasticsearch

Part of our setup is Elasticsearch. If you’ve never used it, it’s a clustered search engine that achieves high availability through node and cluster coordination. To upgrade it, you have to do a so-called rolling restart. You can’t just yank the plug and upgrade all nodes because that would make the cluster unavailable and cause all sorts of issues with split brains, data loss, and worse. This is exactly the kind of thing you normally want an experienced DevOps person to handle responsibly. DevOps is full of tasks where experience matters, because the wrong shortcut can quietly turn into an outage or data loss.

The XXX Elasticsearch cluster upgrade is complete.

All three nodes are now on 9.3.1:

  • es1: 9.3.1
  • es2: 9.3.1
  • es3: 9.3.1

Final cluster state:

  • green
  • 3 nodes / 3 data nodes
  • 0 relocating shards
  • 0 initializing shards
  • 0 unassigned shards

Notable preflight/result details:

  • Latest 9.3.x release verified: 9.3.1
  • Snapshot health passed before rollout; the latest successful snapshot was 2026_03_12_05_33
  • System feature migration was required and completed before the first node restart

I did the rollout one node at a time in this order: es3, es2, es1.

No code changes were needed in the repo.

This morning, I asked Codex to reprovision the cluster in our old data center. I last did this in September, manually. It first did a quick inventory check and a dry run on one of the nodes, and verified that the cluster was in a healthy state (green) and that the latest snapshot was good. Then it went ahead and executed the rollout, following the plan to the letter. The block above is the report it gave me when it finished. The whole thing took about 20 minutes. I was on standby, granting permission the few times it came back to ask before the next step, as my skill for this specifies.

Before doing this, I iterated with Codex on writing a skill for this task as part of my Hetzner migration. The skill covers what an experienced DevOps person would normally do: preflight checks before kickoff, confirmation gates that ask me for permission, and guidance for “what if this happens” scenarios. There’s plenty of advice on the internet, and I even wrote about this exact topic years ago in Running Elasticsearch in a Docker 1.12 Swarm. Writing blog articles was something I used to do more regularly as a way of saying, “I should remember this weird thing I just spent 2 days figuring out so I don’t have to spend that time again.” It’s the pre-AI way of creating and recording skills.
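To give a flavor of what the skill encodes, the core preflight gate boils down to something like this sketch. The host and snapshot repository name are placeholders, and the actual skill is prose instructions for the agent, not code:

import requests

ES = "http://es1:9200"  # placeholder node address

# Gate 1: the cluster must be green before any node is touched.
health = requests.get(f"{ES}/_cluster/health", params={"timeout": "60s"}).json()
assert health["status"] == "green", f"cluster is {health['status']}; aborting"

# Gate 2: the latest snapshot must have completed successfully.
snaps = requests.get(f"{ES}/_snapshot/backups/_all").json()["snapshots"]
latest = max(snaps, key=lambda s: s["start_time_in_millis"])
assert latest["state"] == "SUCCESS", "latest snapshot unhealthy; aborting"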

If you are interested, you can find the skill file I used here.

What’s next?

Over the past few weeks, I’ve been planning and executing a ridiculous amount of work. I’ve launched two new websites via Cloudflare. I’ve created a few new OSS projects. I’ve done some major surgery on our two live deployments. I’ve also shipped a few major new features. Somehow, I have also found some time to try out OpenClaw and play with some new AI stuff. I’ve compressed months of work into a few weeks. I’m not going to lie: I’m exhausted, but I’m also energized. This is crazy fun.

Next on my agenda for modernizing DevOps bullshit that I don’t want to deal with is getting some world-class AI monitoring and alerting in place. I need telemetry, logging, and all the rest. I have some of that already, but having it and actually using it are two different things. I want an AI to handle the operational discipline part: checking uptimes, verifying backups, watching resource usage, and making sure everything is working as it should. I want it to give me daily reports, summarize what matters, and escalate issues. I don’t want to take a sabbatical to set all this up manually. I just want to get this shit done.

If this sounds like something your team needs

One of the other things I did with Codex recently was launch our AI services and consulting site: formationxyz.com.

The pitch is simple. A lot of companies can see the opportunity with AI, but they struggle to turn that into practical systems, useful workflows, and actual leverage for their teams. That is exactly the gap we want to help close.

At FORMATION XYZ, we help small teams automate repetitive work, build practical AI systems, and put agentic workflows in place that reduce manual effort and create more capacity. If the kind of work I described above sounds interesting to you, and you want help applying AI inside your company in a pragmatic way, we can help.

The Impact and Achievements of AI4SE in 2025

AI is reshaping how software is built, tested, and taught. In AI for Software Engineering (AI4SE), that transformation is engineered on purpose. A research partnership between JetBrains and Delft University of Technology (TU Delft), AI4SE brings together leading labs, industry-grade tools, and five dedicated research tracks to turn advances in AI into practical gains for developers and learners worldwide.

AI4SE members

In this post, we will tell you about last year’s achievements, broken down by track.

AI4SE overview

“This collaboration between the top IDE developer and a top European uni is super exciting.”

Ziyou Li

AI4SE was launched in late 2023. Five PhD students and their supervisors from TU Delft work together with JetBrains researchers, as well as BSc and MSc students, on topics exploring AI in software engineering. The topics fall within the following five research tracks:

  1. Testing and Evaluating LLMs and SWE Agents

This track explores how autonomous AI agents, leveraging the reasoning capabilities of large language models (LLMs) and multi-agent orchestration, assist developers in coding, testing, and automating workflows. The research in this track seeks to maintain the safety, robustness, reliability, and long-term maintainability of LLM-powered agents.

  2. Large Language Model Adaptation for Coding Tasks

This track’s goal is to adapt and personalize LLMs for different IDE users’ coding tasks, evaluating the LLMs’ emerging capabilities and ensuring that the outputs are timely, safe, and relevant. The research in this track aims to overcome issues such as those associated with massive training data from public domains and reliance on one model only.

  3. Interactive and Aligned IDEs in the LLM Era

This track aims to build simple, non-disruptive tools that bring AI-powered code generation and explanation directly into developers’ existing IDE workflows, going beyond chat-based interfaces to make everyday coding easier and more productive.

  4. Utilizing Runtime Information to Improve Development Processes

In mid-2025, this track’s team decided to pivot toward agentic and multi-agent systems, exploring how dynamic analysis techniques can assist in AI engineering and AI observability.

  5. Intelligent Teaching Assistant in Programming Education

This track aims to build a smart AI teaching assistant that gives students personalized, context-aware help with programming—from better code generation and tailored hints to clear, metaphor-based explanations and custom learning materials—so they can understand concepts more easily and reach their learning goals more effectively.

Several of our teams participate – Software Testing Research, HAX, Dynamic Program Analysis, and Education Research – alongside teams from the TU Delft side: the AISE and CISE research labs from the Software Engineering Research Group (more details on who is involved can be found on the AI4SE People page). The program is additionally part of the Innovation Center for Artificial Intelligence (ICAI).

AI4SE in 2025: Highlights

“Being here at one of the well-established and leading “for developers” companies in the industry during this rapid time of change comes with some chaos, but also with opportunity to do work which will influence the state of development worldwide.”

Sergey Datskiv

In this section, we want to showcase specific achievements of AI4SE in 2025. We begin with a student project that has had significant impact, then present milestones reached by our graduate students, followed by individual highlights per track: tools and plugins, product support, and standout publications (our running list of publications can be found here).

Impact showcase

“It was both challenging and rewarding to train ML models on JetBrains’ production-scale data. With the help of some talented engineers, we rolled out trigger models which now save 20-30% of code completion inference.”

Aral de Moor

We would like to highlight a research project that has made a significant impact far beyond the research environment. Namely, Aral de Moor’s work with Maliheh Izadi, Arie van Deursen, and Sergey Titov on trigger models originated as Aral’s student project and has been very successful in its application since then.

Aral and his team built a machine learning model that uses code context and telemetry data to predict the optimal moment to trigger code completion, boosting developer productivity. They studied real-world interactions to fine-tune the model so that it filters out completion requests that are unlikely to be accepted. The model is able to avoid generating completions about a third of the time with no significant impact on other completion metrics – while significantly saving inference cost. While the feature was originally available for Kotlin only, Aral and his team have rolled it out for every programming language in all JetBrains IDEs.

The paper on this work won the Distinguished Paper Award at AIware 2024, and Aral’s newer paper about the models has been accepted to the IDE Workshop co-located with ICSE 2026.

2025 MSc graduates

“In this past year, it’s been exciting to have seen and ultimately become a part of the transition from Fleet into the foundation for a new product that is centered around the flow of delegating coding tasks to different agents.”

Nadine Kuo

Six MSc students who worked on their theses as JetBrains interns graduated in 2025. Here they are, with their thesis titles, available for you to read:

  • Arnav Chopra, Building Better Programmers: An AI System for Guided Program Decomposition
  • Sergey Datskiv, Prompt, seed, generate: Seeding for test case generator with LLMs
  • Casper Robert Dekeling, Comparing the hint quality of a Small Language Model and a Large Language Model in automatic hint generation
  • Milan de Koning, Metamorphic testing for LLM-based code repair
  • Nadine Kuo, Proactive AI in IDEs
  • Saga Rut Sunnevudóttir, The Status of JavaScript Test Generation: A Benchmark-Based Evaluation

Four of these students (Sergey, Milan, Nadine, and Saga) have started working full-time at JetBrains since graduation. We featured their journey in posts this summer (parts I and II).

PhD Students who passed the Go/NoGo milestone

Three PhD students successfully passed their Go/NoGo in 2025. This is an important milestone in PhD programs in the Netherlands, although the details of when and how it occurs differ across institutions and faculties. While the student meets regularly with their supervisor in the time leading up to the Go/NoGo meeting, this meeting is a more formal one, and it requires the student to submit a project plan with expected findings. At the meeting, a committee evaluates whether the PhD student is likely to successfully complete their thesis in time, then shares recommendations for the student along with its decision: Go (‘continue the student’s project’) or NoGo (‘terminate the student’s project prematurely’).

The students who successfully passed the Go/NoGo milestone, and their thesis projects, are:

  • Daniele Cipollone, Model Adaptation to Coding Tasks
  • Ziyou Li, Interactive and Aligned Agentic IDE
  • Yuri Noviello, Designing and Evaluating AI-Generated Visual Analogies for Computing Education

Publications and other achievements

“The cool thing about JetBrains in general and AI4SE is that it’s so multi-faceted that it’s really easy to come up with cross-disciplinary ideas and actually get to work on them.”

Roham Koohestani

Here are some of the most important of AI4SE researchers’ achievements in 2025:

Track 1: Testing and Evaluation of LLMs and SWE Agents

  • Milan de Koning has been working on data leakage, that is, when a model sees parts of the test data during training. Specifically, he studies how metamorphic testing, which changes code without altering its meaning, can reveal when models rely on memorization rather than true understanding, and he applies this to AI agents.

Track 2: Large Language Model Adaptation for Coding Tasks

  • Daniele Cipollone developed TreeRanker, a fast and architecture-agnostic approach using a token-aware ranking system for code completion. In addition to the ranking method, this project introduces a new dataset for evaluating completion ranking based on the Long Code Arena benchmark. Daniele presented TreeRanker in the industry track of ASE 2025, gave the Doctoral Keynote at the Code Completion Challenge (part of ASE 2025), gave a talk on integrating LLMs in IDEs at the doctoral symposium of the FSE 2025 conference, and one on code vulnerabilities and automating their detection at the LLM4Code workshop at ICSE 2025.

Track 3: Interactive and Aligned IDEs in the LLM Era

  • Nadine Kuo’s work on a new agentic development environment has made it possible for developers to spin off multiple tasks to async agents, whether in isolated containers, separate worktrees, or soon in the cloud. Recently, she has been working on support for Claude agent-related features, including hooks and skills to provide developers with more fine-grained control over how their agent fits into their day-to-day workflows. This work was accepted to ACM IUI 2026 as a full paper.
  • Ziyou Li collaborated with Agnia Sergeyuk on Prompt-with-Me, a tool that turns IDE users’ scattered prompts into a clean, reusable, and context-aware in-IDE prompt library. He developed the prototype and tested Prompt-with-Me, with successful results, with a dozen developers across different industries. Their paper detailing this was presented in the industry track of ASE 2025.
  • Further work by Ziyou concerned a high-level design of an agent-enabled IDE and the roadmap for implementing different aspects of it; the paper was presented at the FSE 2025 doctoral symposium. Ziyou also presented a short paper with Maliheh Izadi at the International Workshop on Envisioning the AI-Augmented Software Development Life Cycle at the FSE 2025 conference. This paper proposes a mediator agent to interface between the developer, the IDE and its tools, agentic tools, and external systems.
  • Agnia’s work with Maliheh on the human-AI experience in IDEs was accepted by the Empirical Software Engineering journal at the end of last year and published in early January 2026. On top of that, their work on developers’ needs with respect to AI assistants in IDEs was accepted for the Software Engineering in Practice (SEIP) track at ICSE 2026.
  • Roham Koohestani has been working with Maliheh on projects that resulted in two papers, presented at ICSE 2025 and FSE 2025. The first proposes hyper-dimensional vector spaces to model human-computer interaction, focusing on user actions, stylistic preferences, and project context. The second introduces HyperSeq (Hyper-Adaptive Representation for Predictive Sequencing of States), a novel, resource-efficient approach designed to model developers’ cognitive states.

Track Crossover: 3 and 4

  • Roham has been collaborating with Ateş Görpelioğlu on AgentGuard, a framework for runtime verification of agentic AI systems, which he presented at the AgenticSE workshop held at ASE 2025. His work on AgentGuard has sparked a cross-track collaboration between Tracks 3 and 4. The project also initiated discussions with the Koog team to explore potential integration of AgentGuard into the Koog framework.

Track 4: Utilizing Runtime Information to Improve Development Processes

  • Zahra Seyedghorban, in collaboration with Yelizaveta Brus (MSc student, UWaterloo, REBELs research group), worked on the Test Error Grouping for Asgard project to investigate how crash deduplication techniques can help cluster similar test failures. They adapted two state-of-the-art approaches: (1) FaST, a term-based method that aligns stack traces to measure lexical similarity, and (2) BERTopic, an embedding-based topic-modeling approach that captures semantic similarity in failure descriptions.
  • Ateş identified issues related to tool-calling functionality when using OpenRouter models within the Koog framework. He also discovered missing capabilities in Koog’s OpenTelemetry support and contributed to resolving these issues by collaborating closely with engineers from JetBrains.
  • On automated testing of microservice architectures, the paper “Automated Network-Level Fault Injection Testing of Microservice Architectures” by Delano Flipse, Hakan Simsek, Jeremie Decouchant, and Burcu Kulahcioglu Ozkan was accepted to the research track of ICSE 2026. The method dynamically models the system’s resilience behaviors through observed test executions and uses this information to generate the set of fault combinations to explore.
  • Burcu gave the conference keynote talk, From Formal Methods to Testing of Distributed Systems, at FORTE’25, the 45th International Conference on Formal Techniques for Distributed Objects, Components, and Systems.

Track 5: Intelligent Teaching Assistant in Programming Education

  • Gosia Migut, Anastasia Birillo, and Yuri Noviello have worked on AI-generated metaphors, a tool that extracts coding concepts from task descriptions and generates visual and text-based metaphors to explain these concepts to students.

Looking Ahead in 2026

“Happy to be part of the AI revolution in software engineering with JetBrains!”

Daniele Cipollone

AI4SE turned ambitious ideas into real 2025 impact. From smarter testing and code completion to agentic IDEs and teaching aids, our researchers delivered real tools evaluated and backed by peer-reviewed research. With new graduates and a growing network of collaborators, the program is entering 2026 as a proven foundation for fostering the growth of emerging researchers and as a solid engine for reshaping how software is built and learned.

If you are a TU Delft student interested in joining AI4SE, contact Mitchell Olsthoorn for general questions about the thesis procedures, or reach out to the university track leads to learn about project opportunities.