Hashtag Jakarta EE #318

Welcome to issue number three hundred and eighteen of Hashtag Jakarta EE!

As I write this, I am in Brussels for FOSDEM’26. Stay tuned for an update from the conference in a separate post shortly. Fun fact: I am writing this post in exactly the same spot in the lobby of the same hotel I stayed at for FOSDEM’19, where I wrote Hashtag Jakarta EE #5. I think it was in that moment that I realised this would become a weekly blog series.

After FOSDEM, I am headed directly to Stockholm for Jfokus 2026. Whenever I am at Jfokus, I host the Jfokus Morning Run, and this year is no exception. There is no need to sign up for it. Just show up outside the venue at 7:15 on Wednesday morning.

From the discussions in the Jakarta EE Platform calls over the last couple of weeks, it looks like we won’t see a release of Jakarta EE 12 on this side of summer (in the Northern Hemisphere, at least). Since Jakarta EE 11 was delayed by a year, most vendors are currently working on their implementations, which leaves few resources for work on the Jakarta EE 12 specifications. At the same time, we want to catch up with the original directive from the Steering Committee of the Jakarta EE Working Group to ship a major release of Jakarta EE about six to nine months after an LTS release of Java. A likely compromise is to release Jakarta EE 12 by the end of 2026. The deliberations are still ongoing, so stay tuned for more updates.

The registration for Open Community Experience 2026 has opened. I will be presenting The Past, Present, and Future of Enterprise Java at the main stage there.

Ivar Grimstad


[D]0S – High-Fidelity Engineering: Next.js 16 + Gemini 3 + Vibe Coding with Antigravity

This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI

About Me

I am David Menor. My development philosophy in 2026 is clear: the barrier between idea and execution has vanished. I am not just a developer; I am a systems orchestrator.

My portfolio is a high-fidelity data terminal that lives at the intersection of premium design and quantum AI engineering. I wanted to create an immersive experience that feels like operating a high-security mainframe, where every interaction breathes speed and precision.

Portfolio

How I Built It: The Vibe Coding Revolution

This project wasn’t “programmed” in the traditional sense; it was created through pure Vibe Coding with Google Antigravity. My process involved describing complex flows, system aesthetics, and data architectures, letting Antigravity handle the heavy lifting of implementation while I refined the vision.

Tech Stack

  • Engine: Next.js 16.1.6 (App Router) + React 19. The absolute state of the art in web performance.
  • AI Intelligence: Gemini 3 Flash acts as the site’s neural core, processing my career path and responding to visitors with deep technical context.
  • “Secure-Obsidian” Aesthetic: A visual system built on Tailwind CSS v4 and Framer Motion 12, designed to last and scale.
  • Deployment: Dockerized and launched on Google Cloud Run in minutes, leveraging native auto-scaling.

Google AI Integration: My Multi-Agent Pair Programmer

Antigravity wasn’t just another tool; it was my engineering partner every step of the way:

  1. Iteration at the Speed of Thought: I was able to jump from a design concept (like the “Sticky Stack” effect for projects) to a functional implementation in seconds. Antigravity understands the technical “vibe” I’m after — clean lines, mono fonts, tactile transitions — and translates it into production-ready code.
  2. Agentic Multitasking: While I was defining internationalization messages (next-intl), Antigravity ensured that the Google Cloud Run deployment was flawless, handling ports, containers, and environment variables without me having to touch a terminal.
  3. Aesthetic Refinement: From generating a minimalist favicon that respects the site’s identity to fine-tuning entrance animations, the AI acted as a technical art director.

What I’m Most Proud Of

I am incredibly proud of building a complex system that feels cohesive. The sections aren’t isolated pieces; together they form a personal “Core OS”.

Achieving a Next.js 16 architecture that feels this lightweight, having Gemini AI respond with such a defined personality, and making the Cloud Run deployment transparent — all orchestrated from Antigravity — is the real testament to what being a Senior Engineer means in 2026.

Built at the speed of light. Stay Bold.

[AutoBE] achieved 100% compilation success of backend generation with “qwen3-next-80b-a3b-instruct”

This article was originally posted by u/jhnam88 to the r/LocalLLaMA subreddit four months ago. A new article may come soon.

AutoBE is an open-source project that serves as an agent capable of automatically generating backend applications through conversations with AI chatbots.

AutoBE aims to generate 100% functional backend applications, and we recently achieved 100% compilation success for generated backends even with local AI models like qwen3-next-80b-a3b (as well as the mini GPT models). This is a significant improvement over our previous attempts with the same model, where most projects failed to build due to compilation errors even though backend applications were generated.

  • Dark background screenshots: After AutoBE improvements
    • 100% compilation success doesn’t necessarily mean 100% runtime success
    • Shopping Mall failed due to excessive input token size
  • Light background screenshots: Before AutoBE improvements
    • Many failures occurred with gpt-4.1-mini and qwen3-next-80b-a3b
| Project | qwen3-next-80b-a3b-instruct | openai/gpt-4.1-mini | openai/gpt-4.1 |
| --- | --- | --- | --- |
| To Do List | Qwen3 To Do | GPT 4.1-mini To Do | GPT 4.1 To Do |
| Reddit Community | Qwen3 Reddit | GPT 4.1-mini Reddit | GPT 4.1 Reddit |
| Economic Discussion | Qwen3 BBS | GPT 4.1-mini BBS | GPT 4.1 BBS |
| E-Commerce | Qwen3 Shopping | GPT 4.1-mini Shopping | GPT 4.1 Shopping |

Of course, achieving 100% compilation success for backend applications generated by AutoBE does not mean that these applications are 100% safe or will run without any problems at runtime.

AutoBE-generated backend applications still don’t pass 100% of their own test programs. Sometimes AutoBE writes incorrect SQL queries, and occasionally it misinterprets complex business logic and implements something entirely different.

  • Current test function pass rate is approximately 80%
  • We expect to achieve 100% runtime success rate by the end of this year

Through this month-long experimentation and optimization with local LLMs like qwen3-next-80b-a3b, I’ve been amazed by their remarkable function calling performance and rapid development pace.

The core principle of AutoBE is not to have AI write programming code as text for backend application generation. Instead, we developed our own AutoBE-specific compiler and have AI construct its AST (Abstract Syntax Tree) structure through function calling. The AST inevitably takes on a highly complex form with countless types intertwined in unions and tree structures.

When I experimented with local LLMs earlier this year, not a single model could handle AutoBE’s AST structure. Even Qwen’s previous model, qwen3-235b-a22b, couldn’t get through it cleanly. The AST structures of AutoBE’s specialized compilers, such as AutoBeDatabase, AutoBeOpenApi, and AutoBeTest, acted as gatekeepers that prevented us from integrating local LLMs with AutoBE. But in just a few months, newly released local LLMs suddenly succeeded in generating these structures, completely changing the landscape.

// Example of AutoBE's AST structure
export namespace AutoBeOpenApi {
  export type IJsonSchema = 
    | IJsonSchema.IConstant
    | IJsonSchema.IBoolean
    | IJsonSchema.IInteger
    | IJsonSchema.INumber
    | IJsonSchema.IString
    | IJsonSchema.IArray
    | IJsonSchema.IObject
    | IJsonSchema.IReference
    | IJsonSchema.IOneOf
    | IJsonSchema.INull;
}
export namespace AutoBeTest {
  export type IExpression =
    | IBooleanLiteral
    | INumericLiteral
    | IStringLiteral
    | IArrayLiteralExpression
    | IObjectLiteralExpression
    | INullLiteral
    | IUndefinedKeyword
    | IIdentifier
    | IPropertyAccessExpression
    | IElementAccessExpression
    | ITypeOfExpression
    | IPrefixUnaryExpression
    | IPostfixUnaryExpression
    | IBinaryExpression
    | IArrowFunction
    | ICallExpression
    | INewExpression
    | IArrayFilterExpression
    | IArrayForEachExpression
    | IArrayMapExpression
    | IArrayRepeatExpression
    | IPickRandom
    | ISampleRandom
    | IBooleanRandom
    | IIntegerRandom
    | INumberRandom
    | IStringRandom
    | IPatternRandom
    | IFormatRandom
    | IKeywordRandom
    | IEqualPredicate
    | INotEqualPredicate
    | IConditionalPredicate
    | IErrorPredicate;
}
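To make the function-calling approach concrete, here is a hypothetical TypeScript sketch of the "gatekeeper" idea: a payload emitted by an LLM is walked and validated against a schema union before any code generation happens. The simplified union and the `validateSchema` helper are illustrative only, not AutoBE's actual API.

```typescript
// Hypothetical sketch: validating an LLM function-calling payload
// against a simplified JSON-schema-style AST union. The real
// AutoBeOpenApi.IJsonSchema union (excerpted above) is far larger.

type IJsonSchema =
  | { type: "boolean" }
  | { type: "integer" }
  | { type: "number" }
  | { type: "string" }
  | { type: "array"; items: IJsonSchema }
  | { type: "object"; properties: Record<string, IJsonSchema> };

// Walk the tree and reject any node whose discriminant is unknown.
// A rejection here would become feedback for the model's next attempt.
function validateSchema(node: any): node is IJsonSchema {
  if (node === null || typeof node !== "object") return false;
  switch (node.type) {
    case "boolean":
    case "integer":
    case "number":
    case "string":
      return true;
    case "array":
      return validateSchema(node.items);
    case "object":
      return Object.values(node.properties ?? {}).every(validateSchema);
    default:
      return false; // unknown discriminant: invalid AST node
  }
}

// A payload an LLM might emit via function calling:
const payload = {
  type: "object",
  properties: {
    id: { type: "integer" },
    tags: { type: "array", items: { type: "string" } },
  },
};

console.log(validateSchema(payload)); // true
console.log(validateSchema({ type: "uuid" })); // false
```

Because every node must hit a known branch of the union, a model that cannot produce well-formed discriminated unions simply cannot get a program past this stage.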

As an open-source developer, I send infinite praise and respect to those creating these open-source AI models. Our AutoBE team is a small project with 2 developers, and our capabilities and recognition are incomparably lower than those of LLM developers. Nevertheless, we want to contribute to the advancement of local LLMs and grow together.

To this end, we plan to develop benchmarks targeting each compiler component of AutoBE, conduct in-depth analysis of local LLMs’ function calling capabilities for complex types, and publish the results periodically. We aim to release our first benchmark in about two months, covering most commercial and open-source AI models available.

We appreciate your interest and support, and will come back with the new benchmark.

Link

  • Homepage: https://autobe.dev
  • Github: https://github.com/wrtnlabs/autobe

[AutoBE] built full-level backend applications with “qwen3-next-80b-a3b-instruct” model

This article was originally posted by u/jhnam88 to the r/LocalLLaMA subreddit five months ago. A new article may come soon.

| Project | qwen3-next-80b-a3b-instruct | openai/gpt-4.1-mini | openai/gpt-4.1 |
| --- | --- | --- | --- |
| To Do List | Qwen3 To Do | GPT 4.1-mini To Do | GPT 4.1 To Do |
| Reddit Community | Qwen3 Reddit | GPT 4.1-mini Reddit | GPT 4.1 Reddit |
| Economic Discussion | Qwen3 BBS | GPT 4.1-mini BBS | GPT 4.1 BBS |
| E-Commerce | Qwen3 Failed | GPT 4.1-mini Shopping | GPT 4.1 Shopping |

The AutoBE team recently tested the qwen3-next-80b-a3b-instruct model and successfully generated three full-stack backend applications: To Do List, Reddit Community, and Economic Discussion Board.

Note: qwen3-next-80b-a3b-instruct failed during the realize phase, but this was due to our compiler development issues rather than the model itself. AutoBE improves backend development success rates by implementing AI-friendly compilers and providing compiler error feedback to AI agents.
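The compiler-error feedback described above can be sketched as a simple retry loop. This is an illustrative mock, not AutoBE's real internals: `agent` and `compile` here are stand-ins showing how diagnostics flow back into the next generation attempt.

```typescript
// Illustrative sketch of a compiler-feedback loop: the agent proposes
// code, the compiler checks it, and any diagnostics are fed back to
// the agent for the next attempt.

interface Diagnostic { message: string }
interface CompileResult { ok: boolean; diagnostics: Diagnostic[] }

// Mock compiler: rejects code until it contains a return statement.
function compile(code: string): CompileResult {
  return code.includes("return")
    ? { ok: true, diagnostics: [] }
    : { ok: false, diagnostics: [{ message: "function body must return a value" }] };
}

// Mock agent: fixes its output once it sees the diagnostic.
function agent(prompt: string, feedback: Diagnostic[]): string {
  return feedback.length === 0
    ? "function add(a, b) { a + b; }"          // first, buggy attempt
    : "function add(a, b) { return a + b; }";  // corrected attempt
}

function generateWithFeedback(prompt: string, maxRetries = 3): string | null {
  let feedback: Diagnostic[] = [];
  for (let i = 0; i < maxRetries; i++) {
    const code = agent(prompt, feedback);
    const result = compile(code);
    if (result.ok) return code;
    feedback = result.diagnostics; // error messages go back to the model
  }
  return null; // give up after maxRetries
}

console.log(generateWithFeedback("write an add function"));
// → "function add(a, b) { return a + b; }"
```

The key design choice is that the compiler, not a human, supplies the correction signal, which is why an AI-friendly compiler with precise diagnostics raises the end-to-end success rate.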

While some compilation errors remained during API logic implementation (realize phase), these were easily fixable manually, so we consider these successful cases. There are still areas for improvement—AutoBE generates relatively few e2e test functions (the Reddit community project only has 9 e2e tests for 60 API operations)—but we expect these issues to be resolved soon.

Compared to openai/gpt-4.1-mini and openai/gpt-4.1, the qwen3-next-80b-a3b-instruct model generates fewer documents, API operations, and DTO schemas. However, in terms of cost efficiency, qwen3-next-80b-a3b-instruct is significantly more economical than the other models. As AutoBE is an open-source project, we’re particularly interested in leveraging open-source models like qwen3-next-80b-a3b-instruct for better community alignment and accessibility.

For projects that don’t require massive backend applications (like our e-commerce test case), qwen3-next-80b-a3b-instruct is an excellent choice for building full-stack backend applications with AutoBE.

The AutoBE team is actively working on fine-tuning our approach to achieve a 100% success rate with qwen3-next-80b-a3b-instruct in the near future. We envision a future where backend application prototype development becomes fully automated and accessible to everyone through AI. Please stay tuned for what’s coming next!

Links

  • AutoBE GitHub Repository: https://github.com/wrtnlabs/autobe
  • Documentation: https://autobe.dev/docs