Why promising open source projects need support beyond academia

Many of the most influential open source projects begin in academic environments. Universities and higher education institutions are well suited to experimentation and have been central to open source for decades. Yet, as open source increasingly behaves like infrastructure, a recurring challenge appears once projects move beyond their original research context.

Patrick Masson, Executive Director of the Apereo Foundation and a speaker at Open Community Experience 2026, explains in an interview that the difficulty is rarely technical:

“What usually breaks is not the idea, and it’s not the technology. The projects that struggle are often very good projects. What breaks is everything around the project once it leaves the academic environment.”

This distinction matters. Developers often assume that if a project works, adoption and sustainability will follow naturally. In practice, the transition from research output to dependable open source infrastructure introduces new requirements: governance, decision-making, contributor continuity, and long-term accountability. These are not problems of code quality. They are problems of structure.

Universities excel at creating open source because they are optimised for discovery and knowledge sharing. They are not, however, designed to be permanent homes for software projects. Masson has framed this as a structural reality:

“Universities are very good at starting things. They are not designed to be the long-term home for software projects, and that’s not a criticism. It’s just not what the academic system is built to do.”

When responsibility for a project’s future is unclear, even successful work can lose momentum. Maintainers move on, priorities shift, and the surrounding ecosystem fails to form. This is the point at which promising open source projects need support beyond academia, not to replace universities, but to complement them.

This is also where open source foundations (like Apereo Foundation and the Eclipse Foundation) and broader communities become essential. They provide the continuity and governance structures required for open source sustainability once academic incentives and funding cycles change. Without this support, innovation risks remaining isolated rather than becoming shared, durable infrastructure.

In his session at OCX, Patrick Masson will examine where promising open source projects most often lose momentum, why this happens after academic success, and what stewardship looks like beyond the code. Developers, architects, and technical leaders will gain a clearer understanding of how open source infrastructure is sustained over time.

Attend this session at Open Community Experience 2026 in Brussels to explore how open source projects transition from academic success to long-term sustainability.


Daniela Nastase


Next.js Weekly #120: Drizzle joins PlanetScale, Prisma Next, Better Auth 1.5, react-doctor, next-md-negotiate, Vercel Queues

🔥 Hot

The Next Evolution of Prisma ORM

Prisma has shared an early look at Prisma Next, a complete rewrite built in TypeScript. It brings a cleaner query API, a type-safe SQL builder, streaming support, extensions, and a new migration system based on graphs. Prisma 7 remains the production recommendation for now, but Prisma Next is being built in the open and will eventually become Prisma 8

If you wanna get these updates in your inbox every week, just subscribe to the newsletter

Next.js Weekly

📙 Articles / Tutorials / News

React is changing the game for streaming apps with the Activity component

This post shows how the new <Activity> component in React 19.2 helps preserve component state when UI sections are hidden. Using a video player example, it demonstrates how to keep playback progress when switching tabs and how to pause the player properly using useLayoutEffect

Error rendering with RSC

This post explains how each of React’s three environments (RSC, SSR, and the browser) responds to errors, how Suspense boundaries change the behavior, and why the browser is ultimately the best place to handle them

Cloudflare rewrites Next.js as AI rewrites commercial open source

In last week’s issue we covered how a Cloudflare engineer used AI agents to rebuild much of Next.js with Vite instead of Turbopack, creating an experimental project called vinext that makes it easier to deploy Next.js apps on Cloudflare. This post explores what it could mean for the Next.js ecosystem and how AI might disrupt commercial open source strategies

𝕏 Handle a blocking component in Next.js

A quick look at two ways to deal with components that slow down page rendering in Next.js

⚛️ React Summit | June 12 & 16, 2026

The world’s biggest React conference in beautiful Amsterdam and online! Learn from top React experts & connect with the community.

Use code NEXT for 10% off tickets

📦 Projects / Packages / Tools

Better Auth 1.5

A huge release with 70+ features and 200+ fixes. Adds a new npx auth CLI, a full OAuth 2.1 provider, Electron support, typed errors with i18n, Cloudflare D1 support, and a self-service SAML SSO dashboard. Some breaking changes, so review before upgrading

react-doctor

A CLI tool from the creator of Million.js. Run one command and get a full health report on your React project. It scans for issues across security, performance, architecture, and correctness, then gives you a 0–100 score with actionable diagnostics you can pass straight to a coding agent to fix

next-md-negotiate

This small tool lets your Next.js app return Markdown to LLMs and HTML to browsers using the HTTP Accept header
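The underlying idea is plain HTTP content negotiation: compare the quality values the client sends for `text/markdown` and `text/html`. A minimal sketch of that comparison (illustrative only, not next-md-negotiate's actual API):

```typescript
// Sketch of Accept-header negotiation (not next-md-negotiate's actual API).
// Returns "markdown" when the client prefers text/markdown over text/html.
function preferredFormat(accept: string): "markdown" | "html" {
  // Parse entries like "text/markdown;q=0.9" into { type, q } pairs
  const entries = accept.split(",").map((part) => {
    const [type, ...params] = part.trim().split(";");
    const q = params.map((p) => p.trim()).find((p) => p.startsWith("q="));
    return { type: type.trim(), q: q ? parseFloat(q.slice(2)) : 1 };
  });
  const quality = (t: string) => entries.find((e) => e.type === t)?.q ?? 0;
  return quality("text/markdown") > quality("text/html") ? "markdown" : "html";
}
```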

airbroke

The open source error catcher just shipped a big update. Version 1.1.92 introduces an MCP server, letting you explore and triage errors from an LLM conversation. Also includes Sentry support, a Next.js 16 upgrade, a UI redesign, and more hosting options

🌈 Related

Drizzle joins PlanetScale

The Drizzle team is becoming part of PlanetScale. Their shared focus on performance and developer experience makes the move a natural fit. Drizzle will continue as an independent open source project with PlanetScale’s support

► The Future of TypeScript

Theo explains how TypeScript grew beyond its original purpose and is now slowing down under massive codebases. To solve this, Microsoft is porting the compiler to Go

► Radix UI → Base UI (in 3min)

Radix UI is no longer actively maintained and the team is now working on Base UI. This video walks through how to migrate in four simple steps

Vercel Queues now in public beta

Vercel Queues lets you send messages from your Next.js routes and process them later with automatic retries. This is useful for handling slow or important tasks (like order processing) without blocking requests. It’s now in public beta and starts at $0.60 per 1M operations

Frontend in Commercial Development: First 6 Months, Expectations vs Reality

Hey! I’m a frontend developer at ByteMinds. It’s been six months since I joined the team and first encountered real production code.

Before this, my experience consisted of pet projects and courses where everything was usually “sterile.” In this article, I want to share not just impressions, but real experience of “surviving” in a commercial project. This is an honest story of what to be prepared for if you’re looking for your first job.

The Stack: Expectation vs Reality

When I was studying, I thought projects picked one modern framework (like React) and everything was strictly built with it. In the “ideal world” of tutorials, that’s exactly how it works.

Reality: In commercial development, it’s different. Right now I’m working on projects managed by CMS platforms (Umbraco and Optimizely). These systems are built on .NET, where most of the frontend is rendered server-side through Razor templates (CSHTML).

In this architecture, React isn’t the “king” of the entire project. It’s used selectively – as isolated components or blocks with complex logic that get embedded directly into Razor templates. The result is that on a single page, you might have several isolated React applications coexisting alongside markup that uses plain JavaScript scripts where needed. It turns out the ability to quickly switch between a framework’s declarative approach and imperative vanilla JS is a crucial skill for “real combat.”

Typical project situation: a page renders through Razor, you’re tweaking card markup and adding simple logic with vanilla JS, and on the same screen there’s an isolated React component handling something complex – filtering, state management, async requests. Within a single task, you constantly have to switch between these approaches and choose the right tool for the specific problem, rather than trying to force everything into React for the sake of “architectural purity.”

The Codebase: Not “Bad Code” but a History of Decisions

A junior’s first shock is legacy code. At first, it seems like the people who wrote this never heard of design patterns. But over time, understanding dawns: the codebase is a history. Some decisions were made in a rush before a release, others were driven by specific client requirements that changed three years ago.

Navigating these layers of different eras is an art in itself. That’s where modern tools and colleagues come in handy.

AI as a Co-pilot: My Experience with Cursor

For me, Cursor has become an indispensable accelerator. I use it for pragmatic tasks where it’s more efficient than manual searching:

  • Context and navigation: It’s great for understanding which files contain scattered logic and how components relate to each other in a massive project.
  • Routine and boilerplate: Generating TypeScript types for an API or scaffolding a component structure – tasks that AI handles like a pro.
  • Risk assessment: Before refactoring, you can ask: “Where is THIS used, and what will break if I delete or change it?”

Writing complex business logic? I wouldn’t trust it with that yet, to be honest. AI is like an intern who works really fast but can confidently spout nonsense. So you still have to check every line of code.

Colleagues

Asking colleagues questions is one of the fastest ways to figure out a task. They often have context that isn’t in the code: why a particular solution was chosen, what was tried before, and where the hidden pitfalls are.

These discussions not only save time but also help develop your “gut feeling.” Gradually, you start to understand better where the real risks are and where you need to dig deeper, versus where you can accept the existing solution and move on.

In commercial development, this is critical: it’s not just about writing code, but doing it safely for the project. Talking with colleagues accelerates your onboarding and helps you start thinking in the context of the product, not just an individual task.

Design and Communication

In pet projects, you’re your own client. In commercial work, mockups are the foundation, but life throws curveballs. For example, a mockup shows 3 tags on a card, but the backend sends 7, and suddenly your layout starts to “dance.”

It’s important to understand that design isn’t always the ultimate truth. Often, designers themselves see their work more as an aesthetic direction rather than a strict final result. They don’t always know 100% what real content and in what quantities will come from the backend.

At that moment, responsibility falls on the developers. We’re the ones who see the real data and have to decide how to display it as closely to the design as possible without breaking the UX. If in doubt, it’s best to go to the designer and clarify their intent. But over time, you develop a feel for the boundaries of flexibility: where you can adapt the solution yourself, and where sign-off is crucial.

This approach teaches you to be more than just “hands” – it teaches you to be an engineer who thinks about the user and the product, and tries to solve a problem before it surfaces in production.

Sometimes You Have to Make Trade-offs

Things Don’t Always Go to Plan: The Carousel Case

One of the most memorable moments was a task to implement a block that was a hybrid of a marquee and a regular slider. The designer kindly provided mockups with animation, approved by the client.

The requirements:

  • Two rows of slides moving in the same direction, but at different speeds.
  • Continuous movement (like a news ticker, linear flow), not slide-by-slide.
  • Looped movement.
  • Overall controls: autoplay on/off, next/previous slide.
  • Slides are clickable cards.

The Technical Struggle

Initially, the project already used the Swiper library and had dozens of sliders implemented with it, so I decided not to add new dependencies. But, as it turned out, standard Swiper is designed for flipping through slides, not for linear flow (surprising, right?). To “squeeze” it to meet our needs, I had to search for hacks.

I found a configuration that turns the slider into a marquee:

typescript
// Imports needed for this snippet (Swiper v9+ module paths)
import Swiper from "swiper";
import { Autoplay, FreeMode } from "swiper/modules";
import type { SwiperOptions } from "swiper/types";

const swiperDefaultOptions: SwiperOptions = {
  modules: [Autoplay, FreeMode],
  autoplay: {
    delay: 0,
  },
  freeMode: {
    enabled: true,
    momentum: false,
    momentumBounce: false,
  },
  loop: true,
  slidesPerView: "auto",
  spaceBetween: 24,
  speed: 4000,
};

// Initialization
this.swiperTopCarousel = new Swiper(this.refs.topSwiper, {
  ...swiperDefaultOptions,
  speed: Number(this.refs.topSwiper.dataset.speed) || swiperDefaultOptions.speed,
});

And of course, a bit of CSS magic was needed for smooth scrolling and predictable behaviour:

css
.swiper-wrapper { transition-timing-function: linear; }

It seemed to work fine, and the hack solved everything, but:

  • Janky loop when dragging. This became the main pain point. Because the slides had different widths, Swiper didn’t calculate the “seam” point of cloned slides correctly. Visually, it looked like a jerk: the container would suddenly jump back to loop the animation. A possible fix would have been to calculate slidesPerView precisely, but for the sake of responsiveness, we needed the auto value. The solution: we had to take away the user’s ability to drag the slider. Harsh? Yes, but navigation buttons were still available.
  • Stop on click. Clicking a slide would stop Swiper’s autoplay. I never fully figured out why. People online who’d implemented this hack suggested disableOnInteraction: false and pointer-events: none on the container. But that didn’t work for us – the cards needed to be clickable.

The Solution: Compromise and a Second Library

I realised I was trying to force Swiper to do something it wasn’t designed for. The ideal candidate seemed to be Splide. It has a built-in type: 'loop' mode with proper slide cloning, which solves the jerking issue. And the AutoScroll module smoothly moves the entire strip:

typescript
// Imports needed for this snippet; gap and clones come from the component's config
import Splide from "@splidejs/splide";
import { AutoScroll } from "@splidejs/splide-extension-auto-scroll";

const baseOptions = {
  type: 'loop',
  autoWidth: true,
  gap: gap,
  arrows: false,
  pagination: false,
  drag: false, // disable drag, rely on autoscroll
  clones: clones
};

this.topSlider = new Splide(this.refs.topSlider, {
  ...baseOptions,
  autoScroll: {
    speed: 0.4,
    pauseOnHover: false,
    pauseOnFocus: false
  }
});

this.topSlider.mount({ AutoScroll });

Now came the dilemma: the project already had dozens of carousels using Swiper. Rewriting them all would be unjustifiably time-consuming and risky. Leaving the Swiper hack meant not delivering the task with the required quality.

In the end, I made the tough call: to add Splide as a second library specifically for this case. Yes, it increases the bundle size. But in this situation, it was the only way to achieve truly smooth animation without writing a custom solution from scratch.

When making such decisions, it’s important to base them not just on code “beauty” and universality, but on the component’s importance to the product. This carousel was the main visual feature of the case studies page and grabbed attention on first visit. Since the component directly impacted the first impression of the design, we decided not to compromise on the visuals.

Lesson learned: sometimes it’s better to sacrifice bundle size for a solid UX and a stable solution, rather than maintaining fragile hacks in a critically important part of the interface.

On Estimates and Responsibility

Estimating tasks is pretty stressful at first. You allocate a certain amount of time, and you work towards it, but one unexpected carousel can eat up a good chunk of it due to unforeseen technical nuances.

This taught me that an estimate isn’t just a number; it’s always about planning for risks, even when it seems like there couldn’t possibly be any. Now I always try to build in a buffer for researching existing code and for unexpected circumstances.

Conclusion

Over these six months, I’ve understood the main thing: commercial frontend isn’t just about writing code in the latest React. It’s about the ability to work with what’s already there, to negotiate, and to find a balance between “beautiful code” and a working business. It’s harder than it seems in courses, but also more interesting and varied. In the end, I’d highlight these key takeaways:

  1. Proactivity matters more than knowledge: If a task isn’t clear – ask. It saves hours of wandering.
  2. Ideal code is sometimes the enemy of the product: In reality, you sometimes need to compromise to get a feature working stably and on time.
  3. Respect for Legacy: Instead of criticising “crappy” code, focus on improving it safely.
  4. Soft skills are key: Being able to explain your thoughts and accept feedback in code reviews makes you grow faster than just memorising syntax.

These were just the first six months. Real production is a noisy, non-linear thing, and school doesn’t prepare you for it. But it’s precisely in this chaos that you build the skills that make you a real developer.

Author: Yakov Shevcov, Frontend developer, ByteMinds

I Reviewed 100+ Indian Engineer Resumes. Here Are the 7 Mistakes Killing Your US Job Applications

I’ve spent the last year deep in the world of tech resumes.
After going through hundreds of applications from Indian engineers targeting US companies and making most of these mistakes myself, I can tell you with confidence: the problem is almost never your skills.
It’s the resume.
The format that gets you hired at Infosys, TCS, or even Flipkart will get you auto-rejected at Google, Amazon, or any US startup. The rules are completely different – and nobody tells you this.
Here are the 7 mistakes I see over and over again.

Mistake 1: Your Resume is 3-4 Pages Long
This is the most common one. Indian resume culture normalizes long resumes. More pages = more experience, right?
Wrong. US hiring managers spend 6 seconds on a resume. Page 2 is rarely read. Pages 3 and 4 simply don’t exist.
The rule: Under 8 years of experience? One page. No exceptions.
I know it feels like you’re leaving things out. You are. That’s the point. Force yourself to keep only what’s impressive.

Mistake 2: You Have a Photo on Your Resume
I get it: every Indian resume template has a photo box in the top right corner.
Remove it immediately.
US companies legally cannot consider your appearance in hiring decisions. A photo on your resume signals that you don’t know US hiring norms, and that’s a red flag before they’ve read a single word.
Same goes for: date of birth, marital status, nationality, and father’s name.

Mistake 3: Your Bullets Describe Duties, Not Impact
This is the big one.
❌ “Responsible for developing microservices for the payment module”
✅ “Built 8 microservices handling 2M daily transactions, reducing payment failure rate by 34%”
The first tells me what your job was. The second tells me what you’re worth.
Every single bullet point on your resume needs a number. If you don’t have an exact number, estimate conservatively and use it. “Improved load time by ~40%” is infinitely better than “Improved application performance.”

Mistake 4: You’re Writing for Humans, Not ATS
Most Indian engineers don’t know this: before a human reads your resume, software scans it.
ATS ranks your resume against keywords in the job description. If you write “web services” but the job says “REST APIs” – you’re eliminated before anyone sees you.
What I do now: open the job description, find the exact technical terms they use, and mirror them precisely in my resume.
This one change alone dramatically improves callback rates.
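The mirroring step above can even be sanity-checked with a few lines of code. A rough sketch (the term list is illustrative; a real check would use the exact terms from the posting you're targeting):

```typescript
// Rough sketch: list job-description keywords missing from a resume.
// The terms array is illustrative, not a real ATS keyword set.
function missingKeywords(jobDescription: string, resume: string): string[] {
  const terms = ["REST APIs", "Kubernetes", "PostgreSQL", "CI/CD"];
  const jd = jobDescription.toLowerCase();
  const cv = resume.toLowerCase();
  // Keep terms the posting asks for that the resume never mentions
  return terms.filter(
    (t) => jd.includes(t.toLowerCase()) && !cv.includes(t.toLowerCase())
  );
}
```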

Mistake 5: Generic Objective Statement at the Top
Almost every Indian resume starts with something like:
“Seeking a challenging and rewarding position in a dynamic organization where I can utilize my skills…”
Nobody reads this. Worse, it signals you wrote the same resume for every company.
Replace it with a 2-line punchy summary:
“Backend engineer with 4 years building distributed systems at scale. Cut infrastructure costs by 30% at [Company] by migrating to serverless architecture.”
Specific. Impressive. Takes 4 seconds to read.

Mistake 6: Your Projects Section is an Afterthought
In India, work experience dominates. Projects are listed at the bottom as an afterthought or not listed at all.
US companies, especially startups, weight projects heavily. A strong side project can outweigh 2 years of enterprise experience.
Your projects section should show:

  • What it does (one line)
  • Tech stack used
  • Real metrics if possible (users, GitHub stars, API calls)
  • Link to GitHub or live demo

If you don’t have strong projects, this is the highest ROI thing you can build right now.

Mistake 7: Ignoring the Visa Question (Or Handling it Wrong)
This trips up almost everyone.
Don’t mention visa status on your resume unless you’re already authorized. Mentioning H1B requirement upfront eliminates you at many companies before they’ve evaluated you.
Address it when asked. By then, they’re already interested in you as a candidate.
If you’re on OPT/STEM OPT, you can add “Authorized to work in the US (OPT)” – this is helpful context, not a red flag.

The Format That Actually Works

Here’s the structure I now recommend for Indian engineers targeting US roles:

Name | City (or “Open to relocation to US”)
Email | LinkedIn | GitHub

SUMMARY (2 lines, keyword-rich, specific)

SKILLS
Languages: Python, Java, Go
Frameworks: Spring Boot, FastAPI, React
Cloud: AWS, GCP
Databases: PostgreSQL, Redis

EXPERIENCE (reverse chronological, achievements only)
Company | Role | Dates

  • Achievement with number
  • Achievement with number
  • Achievement with number

PROJECTS
Project Name | github.com/link

  • What it does + stack + impact

EDUCATION
Degree | University | Year
One page. No photo. No DOB. Numbers everywhere.

Why I Built Something to Fix This
After going through this pain myself and watching so many talented engineers get filtered out for formatting reasons, I built ResumeForge.
It’s an AI resume builder designed specifically for tech job seekers. It handles ATS optimization, keyword matching, and formatting automatically so you can focus on what actually matters: your work.
If you’re an Indian engineer targeting US roles, check it out here.

One Last Thing
Your skills are not the problem. Indian engineers are some of the strongest technically in the world. The hiring system has specific rules, and once you know them, you can work them.
Fix the format. Add the numbers. Target the keywords.
The callbacks will come.

Have you run into any of these? Or found other differences between Indian and US resume culture? Drop it in the comments — genuinely curious what others have experienced.

Speeding up analytics with Databao

Guja is currently an analytics engineer at Carnival Maritime, one of the world’s largest leisure travel and cruise companies. As one of our first alpha users, Guja tried Databao’s context engine, a CLI that extracts schema and metadata from data sources so AI agents can reason over them reliably.  

We spoke with him about what drew him to Databao and how it helped speed up his ad-hoc analytics work in a complex data environment.

What problem were you trying to solve when you found Databao?

I was looking for help with data discovery – essentially, a way to wrap our data marts so I could “chat” with our data. 

How would you describe the data you were working with at Carnival Maritime?

Everything that exists on a cruise ship ends up as data somewhere, from engine temperatures to weather forecasts. Because of that, the data landscape is very complex. It’s spread across many databases, domains, schemas, and teams, and understanding the full context behind the data is difficult.

Before Databao, how did you try to “chat” with your data?

Unless you’re working with a single table in a single database, context is required. About 95% of the work was explaining context to the agent and only 5% was the actual question.

We looked at existing solutions, but none really fit. Most solved only one part of the workflow or came with vendor lock-in, which I wanted to avoid.  

So, I tried building a data chatbot myself by stitching together a schema extraction engine, a context generator, and a text-to-SQL model. In the end, they didn’t mesh well together.

What exactly was hard about providing context to agents?

When you do ad-hoc work on databases and start using LLMs or agents, you have to explain what your tables mean, how they relate to each other, how joins should work, and what the business or technical context is. 

If the LLM or agent doesn’t really understand that context, you quickly get into a mental state where you start thinking that your schemas or tables are bad just because the LLM can’t produce the correct SQL or answer. 

In reality, the problem is often missing context, not bad data modeling. 

That’s why tools that extract schema and context from databases and provide it to LLMs or agents are useful – they help bridge that gap and reduce this mental and technical friction.

How did Databao change how you work?

I can spend more time on analysis instead of data plumbing.

These days, it’s almost normal for analytics engineers to spend most of their time cleaning and managing data instead of doing analysis. This is especially true for ad-hoc work. 

Let’s say you have one day to answer a question. You might not even get around to building a dashboard until the last 20 minutes because you spent all day just getting the data together.

A data engineer moves data from A to B, while an analytics engineer moves KPIs from A to B. You are trying to balance engineering work with an analytical outcome. The problem is that today this balance is off. There is too much engineering, and analysis only happens if everything else is manageable.

Why we built Databao

Guja’s challenge with providing context to AI is why we built Databao’s context engine. It’s a Python library that automatically generates a governed semantic context from data sources like databases and dbt projects. It runs locally in your environment and integrates with any LLM to deliver accurate, context-aware answers.

The context engine is part of the Databao platform enabling self-serve analytics. If you’re on a data team looking to make data more accessible to business users, we’d love to talk. Get in touch with us to launch a proof of concept, discuss your needs, and share feedback. 


PDF Export for Resume Builders Without Hosting Puppeteer

If you’re building a resume builder, you’ve probably gone through this journey:

  1. Users want to download their resume as PDF
  2. You add jsPDF or window.print() — ugly, page breaks everywhere
  3. You try Puppeteer — now you’re maintaining a headless Chrome server
  4. Puppeteer crashes under load, uses 500MB RAM, and needs constant updates

There’s a simpler path.

The Problem with Self-Hosted Puppeteer

Running Puppeteer in production means:

  • Memory: 200-500MB per instance
  • Concurrency: Complex orchestration to handle multiple simultaneous exports
  • Maintenance: Chrome updates break things regularly
  • Cold starts: 2-5 seconds to spin up a new browser instance
  • Cost: A dedicated small VM just for PDF generation

For a resume builder, you’re often generating PDFs on-demand for individual users. You don’t need the complexity of managing your own browser farm.

Using a Screenshot API Instead

The alternative is to use a hosted screenshot/PDF API. Your backend makes a single HTTP call and gets back a PDF:

// Instead of spinning up Puppeteer:
const response = await fetch(
  `https://api.opspawn.com/screenshot-api/api/capture?url=https://myapp.com/resume/${resumeId}&format=pdf`,
  {
    headers: { 'X-API-Key': process.env.SNAPAPI_KEY }
  }
);
const pdfBuffer = await response.arrayBuffer();

Or if you render resumes from Markdown/HTML:

curl -X POST https://api.opspawn.com/screenshot-api/api/md2pdf \
  -H "X-API-Key: YOUR_KEY" \
  -H "Content-Type: text/plain" \
  --data-binary @resume.md \
  -o resume.pdf

When This Approach Makes Sense

Good fit:

  • Resume builders with URL-based preview pages
  • Generating PDF export on user request (not batch)
  • Apps that already render a styled HTML preview
  • Teams that don’t want to maintain Puppeteer infrastructure

Not a fit:

  • Very high volume (1000s of PDFs/minute) where per-call pricing exceeds self-hosting
  • PDFs requiring custom fonts loaded from local disk
  • Highly regulated environments where data can’t leave your servers

Real-World Integration: Resume Preview Thumbnails

Beyond PDF export, screenshot APIs shine for generating preview thumbnails:

// Generate a thumbnail of the user's resume for dashboard display
async function getResumeThumbnail(resumeUrl) {
  const response = await fetch(
    `https://api.opspawn.com/screenshot-api/api/capture?url=${encodeURIComponent(resumeUrl)}&width=794&height=1123`,
    { headers: { 'X-API-Key': process.env.SNAPAPI_KEY } }
  );
  // fetch's Response has no .buffer(); convert the ArrayBuffer instead
  return Buffer.from(await response.arrayBuffer());
}

This gives you:

  • Dashboard card thumbnails without client-side rendering
  • Social sharing images (OG images) of resumes
  • Email confirmation previews of what the user created

Cost Comparison

| Approach | Monthly cost (1K PDFs/mo) | Setup time |
| --- | --- | --- |
| Self-hosted Puppeteer | $15-30 (VM) + 2 days setup | High |
| Screenshot API (free tier) | $0 (100/mo included) | 30 minutes |
| Screenshot API (Pro) | $19/mo (10K PDFs) | 30 minutes |

For most early-stage resume builders, the hosted API wins until you’re generating 50K+ documents per month.

Getting Started

  1. Get a free API key at opspawn.com/snapapi (100 calls/month free)
  2. Test with your resume preview URL:
     curl "https://api.opspawn.com/screenshot-api/api/capture?url=https://your-app.com/preview/demo&format=pdf" \
       -H "X-API-Key: YOUR_KEY" -o test.pdf
  3. Integrate into your export endpoint

The API handles Chromium, page rendering, and PDF generation — you just call an endpoint.
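Wiring this into an export endpoint mostly comes down to building the capture URL safely. A sketch under the assumptions above (the endpoint and `format` parameter are the ones shown earlier in this post; the handler outline is illustrative, not a documented SDK):

```typescript
// Build the capture URL for the endpoint shown above. Using URL and
// searchParams keeps the target preview URL properly percent-encoded.
function buildCaptureUrl(previewUrl: string, format: "pdf" | "png"): string {
  const endpoint = new URL("https://api.opspawn.com/screenshot-api/api/capture");
  endpoint.searchParams.set("url", previewUrl);
  endpoint.searchParams.set("format", format);
  return endpoint.toString();
}

// Inside an export route handler you would then do something like (sketch):
// const res = await fetch(buildCaptureUrl(previewUrl, "pdf"), {
//   headers: { "X-API-Key": process.env.SNAPAPI_KEY! },
// });
// return new Response(await res.arrayBuffer(), {
//   headers: { "Content-Type": "application/pdf" },
// });
```

Encoding matters here: a preview URL with its own query string would otherwise break the outer request.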

SnapAPI is built by OpSpawn, an autonomous AI agent. The API supports both traditional API key auth and x402 micropayments for AI agents that need to generate documents programmatically.

I Shipped 126 Tests Last Month. Here’s the AI Workflow That Got Me There.

Last month I shipped 112 API tests and 14 UI tests. Two months ago, that would’ve taken me a quarter.

The Payoff — Before You Read Anything Else

| Metric | Before AI Agents | After AI Agents |
| --- | --- | --- |
| Tests shipped in a month | ~15–20 | 126 (112 API + 14 UI) |
| Error scenario coverage | Only P0 errors | Systematically covered per endpoint |
| Code consistency | Variable (depends on the day) | High — agents follow patterns better than tired humans |
| PR review comments | Many | Fewer — AI code review catches issues before humans see them |
| My time spent on | Writing boilerplate | Test design & strategy |

Now let me tell you how I got here.

My Setup

I use two AI-powered tools daily — they serve different purposes, and the combination is where the real power lies.

| Tool | What It Is | Best For |
| --- | --- | --- |
| Claude Code | CLI-based coding agent with full codebase access | Multi-file research, large test suites, gap analysis |
| Cursor | AI-powered IDE built on VS Code | Quick edits, in-context tweaks, focused single-file work |

The Secret Weapon: Skill Files & Markdown Context

This is the most important part of the entire workflow.

Before I ask an agent to write a single line of code, I make sure it has context. Without it, the agent guesses. With it, the agent is an informed collaborator.

Without Skill Files              With Skill Files
─────────────────────           ─────────────────────
❌ Agent guesses                 ✅ Agent knows your patterns
❌ Generic output                ✅ Code that fits your codebase
❌ Re-explain every session      ✅ Instant onboarding every time
❌ Lots of manual editing        ✅ Minimal corrections needed

Think of it like onboarding a new contractor. You wouldn’t hand them a Jira ticket and say “go.” You’d give them architecture docs, point them at example code, and explain conventions. Skill files are that onboarding — except you write them once and every AI session benefits forever.

Here’s what I’ve built:

| File | What It Captures |
| --- | --- |
| PROJECT.md | System architecture, domain terminology, environment details, requirements/specs |
| API Test Skill | Framework setup, dynamic payload construction, test data creation APIs & sequences, auth patterns, existing helpers, response validation patterns |
| UI Test Skill | Page Object Model structure, locator strategy, component interaction patterns, assertion approaches, best practices |
| CLAUDE.md / .cursorrules | Repo conventions, build commands, coding standards |

What’s Inside the API Test Skill

This is the file that made 112 API tests possible in a month. It tells the agent:

  • How to build dynamic payloads — which fields are required, which are generated (unique IDs, timestamps), how to construct valid payloads per scenario
  • How to create test data — the exact sequence of API calls needed (e.g., “create customer → create order → authenticate”), how to generate unique data to avoid collisions, how to clean up after
  • Auth & environment config — how to obtain tokens, which headers to include, how to target staging vs. QA
  • Existing utilities — what helpers already exist so the agent doesn’t reinvent the wheel

Here’s a real excerpt from my API test skill file — this is what the agent reads before writing a single test:

/**
 * ENDPOINT: POST /v2/payments
 *
 * Required fields: amount, currency, source_id
 * Generated fields: idempotency_key (UUID per request)
 *
 * Error coverage per endpoint:
 *   400 → Invalid request (missing fields, bad format)
 *   401 → Auth expired / invalid token
 *   402 → Card declined (insufficient funds, expired card)
 *   404 → Resource not found (bad source_id)
 *   422 → Unprocessable (amount = 0, currency mismatch)
 *   429 → Rate limited
 *   500 → Server error (retry with backoff)
 */

// Test data creation sequence:
// 1. Create customer   → POST /v2/customers
// 2. Create card       → POST /v2/cards  (use sandbox nonce)
// 3. Create payment    → POST /v2/payments (reference customer + card)
// 4. Verify status     → GET  /v2/payments/:id (poll until COMPLETED)
// 5. Cleanup           → POST /v2/refunds (refund test payment)

// Payload builder — agent uses this pattern for every endpoint:
function buildPaymentPayload(overrides = {}) {
  return {
    idempotency_key: crypto.randomUUID(),
    source_id: overrides.source_id || testCard.id,
    amount_money: {
      amount: overrides.amount || 1000,
      currency: overrides.currency || 'AUD',
    },
    customer_id: overrides.customer_id || testCustomer.id,
    reference_id: `test-${Date.now()}`,
    ...overrides,
  };
}

Why this works: The agent now knows the exact payload structure, the test data sequence, which fields to randomize, and the error codes to cover. It generates one test per error scenario without me dictating each one.

Here’s what the agent produces from that skill file — a complete error scenario test:

test('POST /v2/payments with declined card returns 402', async () => {
  const payload = buildPaymentPayload({
    source_id: 'cnon:card-nonce-declined',  // sandbox decline token
  });

  console.log(`Testing: declined card → expect 402`);
  const res = await api.post('/v2/payments', payload);
  console.log(`Response: ${res.status} ${res.data?.errors?.[0]?.code}`);

  expect(res.status).toBe(402);
  expect(res.data.errors[0].category).toBe('PAYMENT_METHOD_ERROR');
  expect(res.data.errors[0].code).toBe('CARD_DECLINED');
});

test('POST /v2/payments with expired token returns 401', async () => {
  const payload = buildPaymentPayload();

  const res = await api.post('/v2/payments', payload, {
    headers: { Authorization: 'Bearer expired-token-xxx' },
  });

  expect(res.status).toBe(401);
  expect(res.data.errors[0].category).toBe('AUTHENTICATION_ERROR');
});

Before skill files, I was only covering P0 happy-path scenarios. Now the agent systematically generates tests for every error code listed in the skill file — 400, 401, 402, 404, 422, 429, 500 — per endpoint.
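That per-error-code coverage reads naturally as a table-driven loop. This sketch shows the expansion pattern; the scenario labels and sandbox tokens are hypothetical placeholders, and the commented-out runner registration assumes a Jest-style `test`/`expect` API like the examples above.

```javascript
// Table-driven expansion: one test per error code from the skill file.
// Labels and sandbox tokens below are hypothetical placeholders.
const ERROR_SCENARIOS = [
  { status: 400, label: 'malformed amount',  overrides: { amount: 'not-a-number' } },
  { status: 401, label: 'expired token',     headers: { Authorization: 'Bearer expired-token-xxx' } },
  { status: 402, label: 'declined card',     overrides: { source_id: 'cnon:card-nonce-declined' } },
  { status: 404, label: 'unknown source_id', overrides: { source_id: 'cnon:does-not-exist' } },
  { status: 422, label: 'zero amount',       overrides: { amount: 0 } },
];

function scenarioTitle({ status, label }) {
  return `POST /v2/payments with ${label} returns ${status}`;
}

// With a Jest-style runner, each row registers as its own test:
// for (const scenario of ERROR_SCENARIOS) {
//   test(scenarioTitle(scenario), async () => {
//     const payload = buildPaymentPayload(scenario.overrides ?? {});
//     const res = await api.post('/v2/payments', payload, { headers: scenario.headers });
//     expect(res.status).toBe(scenario.status);
//   });
// }

console.log(ERROR_SCENARIOS.map(scenarioTitle).join('\n'));
```

The agent produces roughly this shape on its own once the skill file lists the error codes; the table is also a convenient place for a human to spot a missing scenario.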

What’s Inside the UI Test Skill

This is why 14 UI tests came out consistent and maintainable:

  • POM structure — how page objects are organized, base classes, naming conventions, directory layout
  • Locator strategy — the single biggest source of flaky UI tests, locked down with clear priorities
  • Component interaction patterns — how to interact with custom components (dropdowns, date pickers, modals)
  • Best practices — never hard-code sleeps, always clean state between tests, use beforeEach for setup

Here’s the test structure template from the skill file — every UI test the agent writes follows this exact shape:

const { test, expect } = require('@playwright/test')
const { chrome } = require('../../utils/browser')
const { navigateToCheckout } = require('../../utils')

const { LandingPage } = require('../pages/Landing')
const { LoginPage } = require('../pages/Login')
const { PasswordPage } = require('../pages/Password')
const { SummaryPage } = require('../pages/Summary')

const { describe, beforeEach } = test

describe('@checkout_regression_au_login', () => {
  let page
  let landingPage, loginPage, passwordPage, summaryPage

  beforeEach(async () => {
    const browserInstance = await chrome()
    page = browserInstance.page
    landingPage = new LandingPage(page)
    loginPage = new LoginPage(page)
    passwordPage = new PasswordPage(page)
    summaryPage = new SummaryPage(page)
  })

  test('Existing user completes login and confirms order', async () => {
    // Arrange
    await navigateToCheckout({ landingPage })

    // Act
    await loginPage.setEmailAddress('jane@doe.com')
    await loginPage.continue()
    await passwordPage.setPassword('password')
    await passwordPage.login()

    // Assert
    await summaryPage.confirmOrder()
    const action = await landingPage.getCallbackAction()
    expect(action).toBe('confirm')
  })
})

And here’s the page object pattern — the FIELDS convention that keeps selectors organized:

const FIELDS = {
  submitButton: {
    selector: '[data-testid="submit-button"]',
  },
  emailInput: {
    selector: '[data-testid="email-input"]',
  },
}

exports.MyPage = class {
  constructor(page) {
    this.page = page
  }

  async clickSubmit() {
    await this.page.waitForSelector(
      FIELDS.submitButton.selector, { state: 'visible' }
    )
    await this.page.click(FIELDS.submitButton.selector)
  }

  async setEmail(text) {
    await this.page.waitForSelector(
      FIELDS.emailInput.selector, { state: 'visible' }
    )
    await this.page.fill(FIELDS.emailInput.selector, text)
  }
}

The locator strategy is defined as a strict priority order:

✅ Priority 1: data-testid attributes
   [data-testid="summary-button"]

✅ Priority 2: ARIA selectors
   button[aria-controls="order-summary-panel"]

✅ Priority 3: Role-based selectors
   button[type="submit"]

❌ Avoid: CSS selectors tied to styling classes
❌ Avoid: XPath tied to DOM structure
❌ Avoid: Hardcoded sleeps — use explicit waits

And the anti-patterns section — without these rules, agents produce code that works in demos but fails in CI:

// ❌ BAD — arbitrary wait, masks timing issues
await page.waitForTimeout(3000)
await page.click('[data-testid="button"]')

// ✅ GOOD — explicit wait for element
await page.waitForSelector(
  '[data-testid="button"]', { state: 'visible' }
)
await page.click('[data-testid="button"]')

// ❌ BAD — try-catch masks real failures
try {
  await page.waitForSelector(selector1)
} catch {
  await page.waitForSelector(selector2)
}

// ✅ GOOD — explicit conditional
if (country === 'us') {
  await page.waitForSelector(usSelector)
} else {
  await page.waitForSelector(defaultSelector)
}

Before I added these anti-patterns to the skill file, roughly 1 in 3 generated tests had at least one of these issues.

Bonus: Writing skill files forces you to codify knowledge that usually lives only in your head. It becomes documentation that helps human teammates too.

The Workflow: Research → Plan → Implement

I never just say “write me some tests.” I follow a deliberate three-phase process.

In Claude Code: Research → Plan → Implement

  1. Research — “Read existing tests, read the API spec, read the skill files. What’s covered? What’s missing?” The agent explores and builds a mental model. I review its understanding before moving forward.

  2. Plan — “Propose which tests to write, in what order, and why.” The agent produces a prioritized list of scenarios. I review and approve before any code is written.

  3. Implement — Only after the plan is approved does the agent write code. Because it’s already done the research and has an approved plan, the code is targeted, well-structured, and aligned.

This prevents the most common failure mode: the agent eagerly writing 500 lines of code that miss the point entirely.

In Cursor: Plan → Implement

Cursor’s workflow is lighter-weight since I’m usually already in the code:

  1. Plan — I describe what I want in the chat, referencing specific files. Cursor proposes an approach inline, and I review it.
  2. Implement — Once I approve, Cursor applies the changes directly in the editor. I review each diff as it appears.

My rule of thumb: Claude Code for large, multi-file efforts. Cursor for focused, in-context edits.

Quality Gates Before Every PR

Writing tests fast means nothing if the tests are broken, unreadable, or unmaintainable. Every piece of AI-generated test code must pass three gates before I raise a PR.

1. All Tests Running and Passing

Non-negotiable. I run the full test suite — not just the new tests — to make sure nothing is broken. If a new test is flaky, it doesn’t ship. I iterate with the agent until it’s stable.

2. Proper Logging for Human Verification

Every test must include meaningful logging so that a human reviewing the test output can understand what happened without reading the code:

  • Log the test scenario being executed in plain English
  • Log key request payloads and response data (sanitized of sensitive info)
  • Log assertion results with context (“Expected order status to be ACTIVE, got ACTIVE — PASS”)
  • Log setup and teardown steps so failures can be traced to their root cause

I explicitly instruct the agent to add this logging. Left to its own devices, it’ll write tests that either log nothing or log everything. The skill files include examples of what “good logging” looks like.
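For reference, the “good logging” examples in my skill files look roughly like this sketch; the helper names and the list of redacted fields are illustrative, not from a real codebase.

```javascript
// Illustrative sketch of the "good logging" pattern the skill file
// describes. Helper names and redacted field names are hypothetical.
function sanitize(payload) {
  // Redact sensitive fields before they reach CI test output
  const REDACTED = ['source_id', 'customer_id', 'Authorization'];
  return Object.fromEntries(
    Object.entries(payload).map(([key, value]) =>
      REDACTED.includes(key) ? [key, '***'] : [key, value]
    )
  );
}

function logAssertion(label, expected, actual) {
  // Logs assertion results with context, e.g.
  // "Expected status to be 402, got 402 — PASS"
  const result = expected === actual ? 'PASS' : 'FAIL';
  console.log(`Expected ${label} to be ${expected}, got ${actual} — ${result}`);
}

// Inside a test, the output reads as plain English:
console.log('Scenario: declined card should return 402');
console.log('Request payload:', sanitize({ amount: 1000, source_id: 'cnon:xyz' }));
logAssertion('status', 402, 402);
```

The point is that a reviewer scanning CI output can follow the scenario, the (sanitized) data, and each assertion without opening the test file.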

3. AI-Powered Code Review Before PR

Before raising a PR, I spin up another agent session specifically for code review. I ask the agent to review the test code with fresh eyes — checking for:

  • Code consistency with existing patterns
  • Missing edge cases or assertions
  • Hardcoded values that should be dynamic
  • Proper error handling and cleanup
  • Test isolation (no shared state between tests)
  • Readability and naming clarity

This is like having a second pair of eyes, except it’s instant and never annoyed that you’re asking for a review at 6pm on a Friday.

Only after this code review pass — and after addressing any findings — do I raise the PR for human review.

What Works Surprisingly Well

| Capability | Why It’s Great |
| --- | --- |
| Pattern matching | Tell the agent “follow the same pattern as existing tests” and it genuinely does — naming, helpers, assertions, structure |
| Spec → Tests | Give it a requirements doc and it produces a structured test suite mapped directly to the spec |
| Error scenarios | Agents don’t have the human bias toward happy paths — they’ll systematically cover timeouts, invalid inputs, auth failures, rate limits |
| Dynamic payloads | Once it understands your payload structure from the skill file, it generates valid variations without you dictating every field |
| Boilerplate | Setup, teardown, data builders, config files — all the tedious-but-essential stuff, handled effortlessly |

What Doesn’t Work (Yet)

  • Flaky test debugging — If a test passes sometimes and fails sometimes, agents struggle. Flakiness stems from timing, environment issues, or shared state — things that require runtime observation, not just code reading.

  • Complex environment setup — Agents can write the test code, but they can’t spin up your Docker containers, seed your database, or configure your VPN. You still own the infrastructure.

  • Business logic judgment — The agent can write a test that checks “the response status is 200,” but it can’t tell you whether 200 is the correct behavior for that scenario. You still need domain knowledge to validate the what, even if the agent handles the how.

Getting Started

Step 1: Create Your Context Files (4–6 hours)

| File | Purpose | Key Contents |
| --- | --- | --- |
| PROJECT.md | Project context | Architecture, terminology, requirements, environment details |
| API Test Skill | API test knowledge | Framework setup, payload construction, test data APIs, auth patterns, helper utilities |
| UI Test Skill | UI test knowledge | POM structure, locator strategy, interaction patterns, assertion approaches, best practices |
| CLAUDE.md / .cursorrules | Tool-specific config | Repository conventions, build commands, coding standards |

Step 2: Establish Your Workflow

  • Always research before planning, plan before implementing
  • Start with one test, iterate, then scale — don’t ask for 20 tests at once
  • Run tests after every change — paste failures back to the agent and let it self-correct

Step 3: Set Your Quality Gates

  • All tests green before PR
  • Meaningful logging in every test
  • AI code review pass before human review
  • No hardcoded test data, no flaky waits, no shared state

Step 4: Invest Time Upfront, Save Time Forever

Writing skill files takes a few hours. But those hours pay dividends across every future session. Every time you or a teammate starts a new AI session, you skip the “explain everything from scratch” phase and go straight to productive work.

Final Thought

AI coding agents don’t replace the engineer. They replace the tedium. The judgment calls — what to test, why it matters, whether the behavior is correct — those are still yours. But the mechanical work of translating those decisions into running code? That’s where agents shine.

The real unlock isn’t the AI itself — it’s the context you build around it. Skill files, structured workflows, and quality gates transform an AI from a generic code generator into a team member who understands your codebase, follows your conventions, and produces work you’re confident shipping.

112 API tests. 14 UI tests. One month. Invest a day building your skill files and try pairing with an AI agent for a week. You won’t go back.

Full disclosure: The ideas, workflow, skill files, and real-world experience in this post are entirely mine — born from months of actually doing this work day in, day out. AI helped me write and structure the blog post itself. Practice what you preach, right?

What Are The Security Risks of CI/CD Plugin Architectures?

CI/CD pipelines are deeply embedded in modern software delivery. They interact with source code, secrets, cloud credentials, and production deployment targets. 

That position makes them an attractive target for attackers, and the plugin ecosystems that power many CI/CD platforms are an increasingly common point of entry.

This article explains how plugin-centric CI/CD architectures create security risk, what the vulnerability data actually shows, and how integrated platforms handle these risks differently. 

We’ll also be direct about TeamCity’s own security history, because we think that context matters when a CI/CD vendor writes about security.

What is a plugin-centric CI/CD architecture?

A plugin-centric CI/CD architecture is one where core platform functionality (integrations, triggers, build steps, notifications, and so on) is delivered through independently developed and maintained plugins rather than built into the platform itself.

Jenkins is the most widely used example. The Jenkins ecosystem includes thousands of community plugins, each maintained separately, with its own release cycle, security practices, and maintenance status.

This model offers significant flexibility. It’s also what introduces a specific class of security risk.

What are the security risks of CI/CD plugins?

When you rely on plugin-centric CI/CD architecture, you run the risk of introducing any of these systemic weaknesses:

  • Decentralized development: Community-driven plugin development can result in inconsistent security standards and delayed patching of vulnerabilities. Simply put, you’re not in control of the plugin developer’s coding or security practices.
  • Plugin abandonware: Some plugins may no longer be maintained, leaving known vulnerabilities unaddressed.
  • Opaque dependencies: Complex interdependencies between plugins can create hidden attack surfaces that are difficult to monitor and secure.
  • Excessive permissions: Plugins often require broad permissions, which increases the potential impact of a compromised or vulnerable plugin.

These weaknesses amplify the risk of security breaches and complicate efforts to maintain a secure CI/CD environment.

How many security vulnerabilities do Jenkins plugins have?

In 2025 alone, more than seventy security vulnerabilities have been found in Jenkins, most of them related to plugins. These range from CVE-2025-31722 in the Templating Engine plugin, which allows potential remote code execution due to insufficient sandboxing, to CVE-2025-53652 in the Git Parameter plugin, where misconfigured parameters can be abused for command injection.

Many of these vulnerabilities remain unpatched in live environments long after fixes are available. Last year, the Shadowserver Foundation detected over forty-five thousand internet-exposed Jenkins servers still vulnerable to CVE-2024-23897, indicating that attackers actively scan for and attempt to exploit outdated instances.

In some cases, the fallout has already been severe.

Has a CI/CD plugin vulnerability ever caused a real breach?

Unfortunately, yes. The 2022 BORN Group supply chain compromise was traced back to a vulnerable Jenkins plugin. Attackers were able to use the plugin as an entry point into the broader build environment.

This incident illustrates a risk pattern that’s also present in other dependency ecosystems like npm and PyPI: a compromised or abandoned plugin that’s automatically trusted and updated by a pipeline can silently inject malicious code into builds before anyone detects it. 

CI/CD plugins sit in a particularly sensitive position because the pipeline has direct access to repositories, secrets, and deployment targets.

What is CI/CD supply chain risk?

CI/CD supply chain risk refers to the possibility that a component in your build and delivery pipeline (a plugin, a dependency, a build image) is compromised in a way that affects the software that you ship to customers.

In plugin-heavy CI/CD environments, this risk is elevated because:

  • Many plugins are maintained outside formal security oversight
  • Abandoned projects can be quietly taken over by malicious actors
  • Pipelines often automatically apply plugin updates without review
  • The CI/CD system’s privileged access means a compromised plugin can affect everything downstream

The scale of CI/CD supply chain incidents is typically smaller than high-profile npm or PyPI cases, but the access that CI/CD systems have to production infrastructure makes the potential impact significant.


How do CI/CD plugin vulnerabilities affect compliance?

If your CI/CD pipelines process or have access to personally identifiable information (which is true of most production systems), plugin security has regulatory implications.

GDPR, SOC 2, and HIPAA don’t prescribe specific CI/CD configurations, but they do require organizations to implement adequate security controls and maintain auditability over systems that handle protected data.

An unpatched plugin with known vulnerabilities, sitting inside a pipeline with access to production secrets, is exactly the kind of finding a security audit will flag.

Compliance teams and legal counsel are increasingly aware of CI/CD as a risk surface. It’s no longer a concern that can stay entirely within the engineering team.

How do integrated CI/CD platforms handle plugin security differently?

Integrated CI/CD platforms bundle core functionality natively rather than relying on external plugins for essential features. This changes the security model in a few specific ways:

Single vendor accountability. When a vulnerability is discovered in a core platform capability, there is one responsible party, one patch cycle, and one documented upgrade path.

You don’t need to track the release schedules of dozens of independent plugin maintainers.

Narrower external dependency surface. Fewer third-party plugins means fewer external dependencies to audit, monitor, and patch. The attack surface is smaller by design.

Native security capabilities. Secret management, access controls, and audit logging built into the platform are subject to the same security standards as the rest of the product. They don’t inherit the risk profile of a community-developed add-on.

More predictable patching. A critical vulnerability in a core platform feature gets a coordinated response. In plugin ecosystems, patch availability and adoption vary widely depending on who maintains each plugin.

Has TeamCity had security vulnerabilities?

Sadly, yes.

CVE-2024-27198 was a critical authentication bypass vulnerability in TeamCity that allowed unauthenticated remote code execution. It was rated 9.8 out of 10 on the CVSS scale and required urgent patching across all affected installations.

CVE-2023-42793 was another critical authentication bypass, also allowing remote code execution without authentication, which was actively exploited in the wild by threat actors including state-sponsored groups.

These were serious incidents. We’re not in a position to claim that integrated platforms are immune to vulnerabilities: we’re not, and our own history makes that clear. 

What we can say is that when these vulnerabilities were discovered, there was a single coordinated response, clear communication to users, and a defined upgrade path.

That’s the difference integrated platforms offer: not the absence of vulnerabilities, but a more accountable response when they occur.

How do I assess my current CI/CD platform’s security risk?

Regardless of which platform you use, these questions are worth working through periodically:

  • How many plugins are active in your pipeline? Do you have a current inventory?
  • When was each plugin last updated? Are any no longer actively maintained?
  • What permissions do your plugins have? Are they scoped to what they actually need?
  • How long does it take you to apply a critical security patch? Do you have a tested process?
  • Who is responsible for plugin security in your organization? Is there clear ownership?
  • Does your CI/CD configuration receive security review? Or only your application code?

These questions apply to any CI/CD environment. The answers tell you more about your actual risk posture than any platform comparison.
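One concrete way to start on the inventory questions: Jenkins, for example, exposes its installed plugins at `/pluginManager/api/json?depth=1`. The sketch below filters a response of that shape for plugins that are inactive or have updates pending; the sample data is made up, and in practice you would fetch the JSON with an authenticated request.

```javascript
// Sketch of a plugin-inventory check against the shape of Jenkins'
// /pluginManager/api/json?depth=1 response. The sample data below is
// made up; in practice you'd fetch it with an API token.
const inventory = {
  plugins: [
    { shortName: 'git',               version: '5.2.0', active: true,  hasUpdate: false },
    { shortName: 'templating-engine', version: '2.5.1', active: true,  hasUpdate: true  },
    { shortName: 'old-notifier',      version: '0.9',   active: false, hasUpdate: false },
  ],
};

function auditPlugins({ plugins }) {
  return {
    total: plugins.length,
    // Plugins with a newer version available: candidates for the patch backlog
    needsUpdate: plugins.filter((p) => p.hasUpdate).map((p) => p.shortName),
    // Installed but inactive plugins: candidates for removal
    inactive: plugins.filter((p) => !p.active).map((p) => p.shortName),
  };
}

console.log(auditPlugins(inventory));
```

Running something like this on a schedule, and alerting on growth in either list, turns the inventory question from a yearly audit item into a routine check.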

Is Jenkins insecure?

Not inherently. Jenkins is a mature, capable platform that thousands of engineering teams operate successfully and securely.

The security risks associated with Jenkins are largely a function of scale and the plugin model. When you have thousands of community plugins with varying maintenance quality, vulnerabilities are statistically inevitable.

Teams running Jenkins securely tend to do a few things consistently: they maintain a minimal plugin footprint, audit plugins regularly, apply patches promptly, and treat CI/CD configuration with the same rigor as application code. The operational discipline required is higher, but it’s achievable. 

The question isn’t whether Jenkins can be run securely. It’s whether your team has the capacity and processes to do so at your current scale.

When does it make sense to consider switching CI/CD platforms?

While the extensibility of many CI/CD platforms is convenient, it can create hidden vulnerabilities that disrupt operations, compromise sensitive data, or expose the organization to regulatory scrutiny. 

These risks affect IT teams, yes, but they also impact partners, customers, and the organization’s reputation. Leaders who understand these potential exposures can proactively reduce security risks, prevent disruptions, and ensure that CI/CD processes support both reliable operations and strategic growth.

An integrated CI/CD platform can help mitigate these risks. They reduce reliance on third-party plugins, provide native security and compliance capabilities, and offer vendor-managed updates with predictable patch cycles.

Switching CI/CD platforms is a significant undertaking and shouldn’t be driven by vendor comparisons alone. It makes sense to evaluate alternatives when:

  • Your team is spending disproportionate time managing plugin updates and compatibility issues.
  • You’ve had a security incident or near-miss traced to a plugin.
  • Compliance or audit requirements are creating friction your current setup can’t easily address.
  • Your plugin footprint has grown to the point where it’s difficult to audit or maintain.
  • The operational overhead of maintaining your current platform is affecting delivery velocity.

If none of these apply, the case for switching is much weaker than any vendor will tell you.

Summary: what to take away from this

  • Plugin-centric CI/CD architectures introduce structural security risks that are worth understanding clearly, regardless of which platform you use.
  • Jenkins plugin vulnerabilities are frequent and well-documented; the patching gap between fix availability and actual deployment is a real operational challenge.
  • CI/CD supply chain risk is real and follows the same patterns seen in other dependency ecosystems.
  • Integrated platforms offer a different risk profile: not zero risk, but clearer accountability and more predictable patching.
  • TeamCity has had critical vulnerabilities of its own, including two severe authentication bypass issues in 2023 and 2024. 
  • The most important security variable is usually your team’s processes and discipline, not your platform choice.

Java Annotated Monthly – March 2026

A lot is happening in tech and beyond, and as we step into March, we have pulled together a fresh batch of articles, thought pieces, and videos to help you learn, connect, and see things from new angles. 

This edition shines with Holly Cummins, whose sharp voice and keen finds on Java bring both insight and inspiration. 

We are also excited to feature the premiere of IntelliJ IDEA — The IDE That Changed Java Forever. From a tiny team of visionary engineers to a global product powering millions, JetBrains didn’t just build an IDE; it redefined what developer tools could be. The documentary is now available on the CultRepo YouTube channel.

IntelliJ IDEA, The Documentary

Featured Content

Holly Cummins

Holly Cummins is a Senior Technical Staff Member on the IBM Quarkus team and a Java Champion. Over her career, Holly has been a full-stack JavaScript developer, a build architect, a client-facing consultant, a JVM performance engineer, and an innovation leader. Holly has led projects to understand climate risks, count fish, help a blind athlete run ultra-marathons in the desert solo, and invent stories (although not at all the same time). She gets worked up about sustainability, technical empathy, extreme programming, the importance of proper testing, and automating everything. You can find her at http://hollycummins.com or follow her on socials at @holly_cummins.

Hello, Java-Monthly-ers! This month, Java Marches On (see what I did there?). The cherry trees are blooming, the daffodils are emerging, and there’s so much new Java stuff to play with. This time of year also means conference season, so part of me is excited, and part of me is cursing past-me for being over-optimistic about how much I can synthesise. I’ve got three talks in three days in the middle of March, and all of them are new talks, on semi-unfamiliar topics. Still, it’s good to learn and try new things, right?

Right now, I’m impressed by how many new things Java is trying. If you want to be picky, Java is an inanimate platform and can’t actually try things. But grammar is for parsers, right? Loads of new things are appearing in the Java runtime itself, and even more new things are popping up in the Java ecosystem.

I enjoyed exploring java.evolved as a way of reminding myself how much the Java language has been improving. Most of the new patterns were familiar, but some of them I didn’t know, so it was good learning, too. However, for me, some of the most exciting Java innovations aren’t about syntax, but performance.

I care a lot about sustainability, and that means I care about performance by default. A few years ago, GraalVM knocked everyone’s socks off by showing how a Java application could be compiled to binary and start faster than a lightbulb. But how fast can a Java application start while still being a Java application? The promise of Project Leyden is to allow a sort of sliding scale of do-up-front-ness, while always allowing a fallback to the dynamic Java that we love. The Quarkus team has been experimenting with Leyden and has started to write about it. My colleague Guillaume wrote a fantastic blog post digging deep into some of the optimisations Quarkus was able to make to fully leverage Leyden (spoiler: sub-100 ms start time for a pure-Java application).

Java’s fast and getting faster, but it’s also versatile. Project Babylon is allowing Java to take advantage of GPUs and run machine learning models (with a little help from some FFM friends). Chicory allows the JVM to run WebAssembly, and since almost any language can be compiled to WASM, the JVM can run almost anything (yes, that means JavaScript on the JVM, and C on the JVM, and …).

What about the front end? The ecosystem for Java UIs hasn’t had all that much excitement for a while (like… a decade). But I predict a back-to-the-future moment. The terminal is back, but this time it’s got CSS, pictures, forms, and animations… and Java has joined the party. TamboUI is a Terminal UI framework for Java that enables interactive, pretty terminal-based applications. The demo trailer is pretty eye-popping. After I wrote this, I spotted Awesome Java UI, a catalog of Java UI frameworks which seemed specifically designed to prove me wrong when I said the Java UI space wasn’t where the energy was. I’ll admit that my statement was a bit sweeping, but I also notice that many of the new projects in the awesome-java list are command-line-oriented, like TamboUI, JLine, and Æsh.

And with that, I’d better get back to writing about Commonhaus, Developer Joy, trade-offs, knockers-up, and interest rates. You’ll be able to see what I end up with (and a preview of upcoming talks) on my website.

Java News

Fresh Java news, hot off the press, so you stay sharp, fast, and one step ahead:

  • Java News Roundup 1, 2, 3, 4
  • LazyConstants in JDK 26 – Inside Java Newscast #106
  • Quality Outreach Heads-Up – JDK 26: DecimalFormat Uses the Double.toString(double) Algorithm
  • Quality Outreach Heads-Up – JDK 27: Removal of ThreadPoolExecutor.finalize()
  • JEP targeted to JDK 27: 527: Post-Quantum Hybrid Key Exchange for TLS 1.3
  • Episode 45 “Announcement – The New Inside Java Podcast”
  • JDK 26 Release Candidate | JavaOne and More Heads-Up
  • Towards Better Checked Exceptions – Inside Java Newscast #107
  • JDK 26 and JDK 27: What We Know So Far
  • Episode 46 “Java’s Plans for 2026”

Java Tutorials and Tips

Dive in and level up your Java game:

  • 25 Years of IntelliJ IDEA: The IDE That Grew Up With Java (#91)
  • Level Up Your LangChain4j Apps for Production
  • Carrier Classes and Carrier Interfaces Proposed to Extend Java Records
  • Bringing Java Closer to Education: A Community-Driven Initiative
  • Local Variable Type Inference in Java: Friend or Foe?
  • Optimizing Java Class Metadata in Project Valhalla
  • Bootstrapping a Java File System
  • Reactive Java With Project Reactor
  • Feedback on Checked Exceptions and Lambdas 
  • A Bootiful Podcast: Java Champion and Hilarious Friend, Richard Fichtner
  • A Bootiful Podcast: Java Developer Advocate Billy Korando on the Latest-and-Greatest in the Java Ecosystem
  • Inside Java Podcast Episode 44 “Java, Collections & Generics, BeJUG”
  • Foojay Podcast #90: Highlights of the Java Features Between LTS 21 and 25
  • What 2,000+ Professionals Told Us About the State of Java, AI, Cloud Costs, and the Future of the Java Ecosystem
  • Ports and Adapters in Java: Keeping Your Core Clean
  • Episode 47 “Carrier Classes” [IJN]
  • The Java Developer’s Roadmap for 2026: From First Program to Production-Ready Professional

Kotlin Corner

Learn the news and pick up a few neat tricks to help you write cleaner Kotlin:

  • Compose Multiplatform 1.9.0 Released 
  • 15 Things To Do Before, During, and After KotlinConf’26 
  • Java to Kotlin Conversion Comes to Visual Studio Code 
  • Koog x ACP: Connect an Agent to Your IDE and More 
  • New tutorial: AI-Powered Applications With Kotlin and Spring AI 
  • klibs.io, the search application for Kotlin Multiplatform libraries, is now published to GitHub: https://github.com/JetBrains/klibs-io
  • Intro to Kotlin’s Flow API
  • Explicit Backing Fields in Kotlin 2.3 – What You Need to Know 
  • Qodana for Android: Increasing Code Quality for Kotlin-First Teams

AI 

Explore what’s possible with smart tools, real use cases, and practical tips on AI:

  • Why Most Machine Learning Projects Fail to Reach Production
  • Anthropic Agent Skills Support in Spring AI 
  • Code. Check. Commit. 🚀 Never Leave the Terminal With Claude Code + SonarQube MCP
  • Let the AI Debug It: JFR Analysis Over MCP
  • Researching Topics in the Age of AI – Rock-Solid Webhooks Case Study
  • Safe Coding Agents in IntelliJ IDEA With Docker Sandboxes
  • Latest Gemini and Nano Banana Enhancements in LangChain4j
  • Spring AI Agentic Patterns (Part 5): Building Interoperable Agent Systems With A2A Integration 
  • From Prompts to Production: A Playbook for Agentic Development
  • The Craft of Software Architecture in the Age of AI Tools
  • Beyond Code: How Engineers Need to Evolve in the AI Era
  • 🌊 Windsurf AI + Sonar: The Agentic Dream Team for Java Devs 🚀
  • Enabling AI Agents to Use a Real Debugger Instead of Logging
  • Runtime Code Analysis in the Age of Vibe Coding
  • Context Engineering for Coding Agents 
  • A Language For Agents 
  • Easy Agent Skills With Spring AI and the New Skillsjars Project!

Languages, Frameworks, Libraries, and Technologies

Discover what’s new in the tools and technologies shaping your stack today:

  • This Week in Spring 1, 2, 3, 4
  • How to Integrate Gemini CLI With IntelliJ IDEA Using ACP
  • A Bootiful Podcast: JetBrains and Spring community Legend Marco Behler
  • Getting Feedback From Test-Driven Development and Testing in Production
  • Kubernetes Drives AI Expansion as Cultural Shift Becomes Critical
  • MongoDB Sharding: What to Know Before You Shard
  • The Shai-Hulud Cyber Worm and More Thoughts on Supply Chain Attacks 
  • Redacting Data From Heap Dumps via hprof-redact – Mostly Nerdless

Conferences and Events

Plan your trips or schedule your online presence for the following events:

  • Devnexus – Atlanta, USA, March 4–6; Anton Arhipov will speak about Debugging with IntelliJ IDEA and Database Migration Tools.
  • JavaLand – Rust, Germany, March 10–12; Marit van Dijk is presenting her famous talk on being more productive with IntelliJ IDEA.
  • JavaOne – Redwood City, USA, March 17–19; Anton Arhipov and Arun Gupta will be at the event – come and meet them.
  • Voxxed Days Zurich – Zurich, Switzerland, March 24; Marit van Dijk is speaking.
  • Voxxed Days Bucharest – Bucharest, Romania, March 26–27
  • Voxxed Days Amsterdam – Amsterdam, the Netherlands, April 1–2; meet the JetBrains people there – Anton Arhipov, Marit van Dijk, and Rachel Appel.

Culture and Community

Join the conversation full of stories, voices, and ideas that bring developers together:

  • How to Be Remarkable
  • So, You “10x’d” Your Work…
  • How I Estimate Work as a Staff Software Engineer 
  • Get Specific!

And Finally…

The most recent IntelliJ IDEA news and updates are here:

  • Wayland By Default in 2026.1 EAP
  • Editor Improvements: Smooth Caret Animation and New Selection Behavior
  • Migrating to Modular Monolith Using Spring Modulith and IntelliJ IDEA

That’s it for today! We’re always collecting ideas for the next Java Annotated Monthly – send us your suggestions via email or X by March 20. Don’t forget to check out our archive of past JAM issues for any articles you might have missed!

ReSharper for Visual Studio Code, Cursor, and Compatible Editors Is Out

ReSharper has been a trusted productivity tool for C# developers in Visual Studio for over 20 years. Today, we’re taking the next step and officially releasing the ReSharper extension for Visual Studio Code and compatible editors.

After a year in Public Preview, ReSharper has been refined to bring its C# code analysis and productivity features to developers who prefer VS Code and other editors – including AI-first coding environments like Cursor and Google Antigravity.

Whether you’re coming from ReSharper in Microsoft Visual Studio, JetBrains Rider, or you’re a VS Code C# developer, the goal is the same – to help you write, navigate, and maintain C# code with confidence and ease.

Why ReSharper for VS Code and compatible editors

ReSharper brings JetBrains’ decades-long C# expertise into lightweight, flexible editor workflows to elevate your code quality.

What it’s designed for:

  • Professional-grade C# code quality
    Advanced inspections, quick-fixes, refactoring, and formatting for C#, Razor, Blazor, and XAML.
  • Refining AI-generated code
    ReSharper helps review and refine AI-assisted code to make sure it meets professional standards before it ships.
  • Wide editor compatibility
    ReSharper works seamlessly across all compatible editors, meeting your needs wherever you code.
  • Proven JetBrains expertise
    Built on over two decades of experience developing .NET tooling used by teams worldwide.
  • Free for non-commercial use
    Available at no cost for learning, hobby projects, and non-commercial development.

Availability

ReSharper is available from:

  • Visual Studio Code Marketplace
  • Open VSX Registry (for Cursor, Google Antigravity, Windsurf, and other compatible editors)

How to install ReSharper

You can install the extension via the Extensions view:

  1. Open Visual Studio Code or another compatible editor.
  2. Go to the Extensions view.
  3. Search for ReSharper.
  4. Click Install.

You can also install the extension via Quick Open:

  1. Open Visual Studio Code or another compatible editor.
  2. Open Quick Open (Ctrl+P / Cmd+P).
  3. Paste: ext install JetBrains.resharper-code
  4. Press Enter, and ReSharper will be installed automatically.
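If you prefer the terminal, the same extension ID can be passed to VS Code’s command-line interface (this assumes the `code` command is on your PATH; other compatible editors expose an equivalent CLI under their own command name):

```shell
# Install the ReSharper extension without opening the editor UI.
code --install-extension JetBrains.resharper-code
```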

Key features at a glance

ReSharper focuses on the core workflows C# developers use daily.

  • Insightful code analysis
    Real-time inspections and quick-fixes help keep your code readable, maintainable, and consistent across projects.
  • Smart coding assistance
    Context-aware code completion, auto-imports, live templates, and inline documentation go way beyond the standard capabilities of a code editor.
  • Solution Explorer
    A central hub for managing files, folders, NuGet packages, source generators, and projects across a solution – just like the one in JetBrains Rider or ReSharper in Microsoft Visual Studio.
  • Reliable unit testing
    Run and manage tests for NUnit, xUnit.net, and MSTest directly in VS Code or a compatible editor, with easy navigation to failing tests.
  • Refactorings you can trust
    Rename works across your solution while safely handling conflicts and references.
  • Fast navigation, including to external and decompiled sources
    Navigate to symbols, usages, files, and types across your solution. When source code isn’t available, ReSharper can decompile assemblies and take you directly to the relevant declarations.

For more information on ReSharper’s functionality, please see our Documentation.

What’s next

The next major area of focus for ReSharper for VS Code is debugging support. Based on feedback collected during the Preview, we’re actively working on support for launching debugging sessions and attaching to processes in .NET and .NET Framework applications.

Beyond debugging, our roadmap includes continued quality improvements and expanding the set of available refactorings.

We’ll be listening closely to your feedback as we define the next priorities. If there’s something that would make ReSharper indispensable in your workflow, we’d love to hear from you.

Licensing

ReSharper for VS Code and compatible editors is available under ReSharper, dotUltimate, and All Products Pack licenses. You can review the pricing options here. 

The extension will continue to be available for free for non-commercial use, including learning and self-education, open-source contributions without earning commercial benefits, any form of content creation, and hobby development.

Get started

  1. Install ReSharper.
  2. Open a workspace/folder in VS Code, Cursor, or another compatible editor.
  3. ReSharper will automatically detect any .sln/.slnx/.slnf (solution) files or a .csproj file in the folder:
  • If only one solution is found, it will open automatically.
  • If multiple solutions are found, click the Open Solution button in a pop-up menu to choose which one to open.

If you encounter any issues, have feedback to share, or additional features to request, you can do so by creating a ticket here.