Open, Extensible, and AI-Ready: A Look Back at Eclipse Developer Tooling in 2025

As we reach the end of 2025, I find myself looking back not just at milestones, but at momentum. Twelve months ago, we knew what we had to do: secure the foundations, sustain the platform, welcome new contributors, and prepare for a future defined by extensibility, openness, and AI.

The Eclipse Foundation’s three Working Groups in the developer tools space (Eclipse IDE, Open VSX, and Cloud Dev Tools) moved forward together. Contributors showed up. Users engaged. Members committed. From meetups to releases, from restructuring to recognitions, 2025 was the year we turned intent into traction.

Let’s take a look at what shaped this year, and what it means for the road ahead.

🧭 Eclipse IDE: Sustained Releases, New Contributors, and Platform Renewal

The Eclipse IDE stayed true to its cadence this year, delivering four successful Simultaneous Releases in March, June, September and December, with project participation peaking at 270 contributors in the spring. The September and December releases held steady at 144 contributors each, which is below our 200-contributor average but still represents a significant commitment.

But the real shift came in community dynamics and long-term platform evolution:

 

  • In February, the IDE Steering Committee reprioritised its roadmap, identifying the need to attract new members, especially those relying on the Rich Client Platform (RCP), and to expand contributor capacity.


  • In March, we celebrated the release of CDT 12.0, featuring a new C/C++ editing experience powered by CDT-LSP.
  • All year long, Community Mentors, funded by the WG, supported newcomers through a unified entry point for all Simultaneous Release projects at 🔗 https://github.com/eclipse-ide. Their efforts were recognised with a Paragon Award in Q4.

  • One standout moment: the collaboration with CodeDay and the Computing Talent Initiative, which gave students their first opportunity to contribute to the Eclipse IDE. 🔗 https://blogs.eclipse.org/post/thomas-froment/students-eclipse-ide-contributors-growing-next-generation-committers

In parallel, 2025 marked the return of physical “Open Community Meetups” (formerly known as “Demo Camps”):

  • 🇫🇷 Toulouse, France, in June
  • 🇮🇳 Bengaluru, India, in February and September (sold out, with 40+ on the waitlist!)

  • 🇩🇪 Germany, in July and November, with gatherings bringing together the Eclipse IDE community and the local Java / iJUG communities.

The GitHub Copilot plugin for the Eclipse IDE reached feature parity with its counterparts on other IDEs, and is now available on the Eclipse Marketplace.

This new wave of AI interest was also supported by the Eclipse community itself. Several teams began prototyping AI features for the Eclipse IDE, with early coordination happening in this Matrix space: 🔗 https://matrix.to/#/#eclipse-ide-ai-discussions:matrix.eclipse.org

Another major development: the completion of Phase 1 of Initiative 31, the effort to modernise the SWT rendering layer using Skia. The initiative showed that SWT can evolve without breaking RCP compatibility, shifting the challenge from one of feasibility to one of available resources. 🔗 https://github.com/swt-initiative31

☁️ Eclipse Cloud Dev Tools: From Adoption to Recognition

2025 has been a turning point for the Cloud Dev Tools ecosystem, with alignment across adoption, visibility, and innovation.

  • Theia AI launched in alpha and immediately drew attention, earning coverage in The New Stack: “Theia AI: The DeepSeek of AI Tooling?” 🔗 https://blogs.eclipse.org/post/thomas-froment/ai-powered-theia-ide-open-source-developer-tools-ai-era
  • TheiaCon 2025, held in October, brought together adopters, contributors and vendors. All sessions are available online: 🔗 https://www.youtube.com/playlist?list=PLy7t4z5SYNaQyTt3QT9nddDLIuEiUKPoX
  • Theia AI won the 2025 CODiE Award for Best Open Source Developer Tool, a rare recognition from outside the open source world. 🔗 https://blogs.eclipse.org/post/thomas-froment/recognition-openness-theia-ai-receives-2025-codie-award

  • The demo video showing Claude + Theia AI integration passed 110,000 views: 🔗 https://www.youtube.com/watch?v=Rou4eiIPrK4
  • To answer the ever-present “who uses Theia?” question, we published a growing list of adopters: 🔗 https://www.eclipse.org/topics/ide/articles/the-active-ecosystem-of-eclipse-theia-adopters/

Beyond Theia, the WG celebrated other contributions: community work on Eclipse Sirius Web, EMF Cloud, Eclipse Che, GLSP, Sprotty, and MBSE SysOn, not to mention the C/C++ (CDT) Cloud Blueprint for Theia.

2025 also saw Theia AI presented in diverse developer communities:

  • OSCAFest in Lagos, Nigeria: our first Eclipse Development Tools presence in West Africa

  • The Things Conference (IoT) in Amsterdam
  • DevOps Goes Native 2025 webinar series
  • The LoRaWAN Podcast of MeteoScientific: 🔗 https://pod.metsci.show/episode/ai-native-toolchains-with-thomas-froment-eclipse-foundation
  • Open Source Experience 2025 in Paris

Each of these moments sparked real interest, especially when the audience realised that Theia AI offers:

  • a developer UX rivaling VS Code
  • compatibility with VS Code extensions
  • the freedom to plug in your own AI models and agents, using virtually any LLM, including air-gapped setups
  • all of it delivered within a truly open, vendor-neutral ecosystem

🌐 Beyond Theia: a broader Cloud Dev Tools ecosystem

Beyond Theia AI, the Eclipse Cloud Dev Tools Working Group advanced several other key projects in 2025, highlighting the breadth of innovation across the ecosystem. To name a few:

  • Sirius Web continued to mature as a browser-based platform for diagram-centric modeling, enabling cloud-native, collaborative design workflows.
  • Langium introduced Langium AI, bringing generative AI to DSL authoring to accelerate grammar creation and onboarding.
  • Eclipse Open Collaboration Tool joined the portfolio to improve distributed collaboration through shared editing, presence, and code context.

Together, these projects show how the Working Group goes beyond IDEs to build the foundations of cloud-native, collaborative, and AI-enabled developer tooling.

📦 Open VSX: From Registry to Strategic Open Infrastructure

In 2025, Open VSX experienced explosive growth, with over 250 million extension downloads per month by November: a fivefold increase compared to 2023. The registry is now a core part of the extension ecosystem for leading AI-enhanced developer tools such as Cursor, Windsurf (Codeium), Kiro (AWS), Antigravity (Google), Bob (IBM), and widely adopted forks like VSCodium.

This growth confirms Open VSX’s role as critical infrastructure, particularly in the era of extensible, AI-augmented developer tools.

To support this evolution, the Eclipse Foundation announced a strategic partnership with AWS to help advance Open VSX as a secure and reliable cloud service. 🔗 https://blogs.eclipse.org/post/mike-milinkovich/aws-invests-strengthening-open-source-infrastructure-eclipse-foundation

2026 will mark a transition year, where Open VSX enters its product phase, with service enhancements, operational readiness, and a delivery structure appropriate to its scale. Yet at its core, it remains an Eclipse open source project: 🔗 https://github.com/eclipse/openvsx

Anyone can contribute, participate in the Open VSX Working Group, or even self-host their own private registry instance. The Working Group, far from fading into the background, will serve as an advisory board for adopters and contributors, helping shape the future of extensibility in developer tools.

It’s not just about releases or repositories. It’s about control.

In 2025, tools like Theia AI, Langium AI, and Open VSX started to come together as a composable stack: a stack you can extend, inspect, deploy, and evolve.

For governments, public research, enterprise platforms, and open ecosystems, where vendor neutrality and sovereignty matter, this is no longer a “nice to have”. It’s a necessity.

🔭 Looking Ahead to 2026

We’re entering 2026 with a roadmap and renewed ambition:

  • In 2026, Open VSX will officially become a product, delivered, supported and governed like the essential infrastructure it has become. But this is only the beginning. With the scale we’ve reached and the community behind us, it’s hard to say where it will stop.
  • The Eclipse IDE Working Group and its contributors will need to take even greater care of the platform’s long-term sustainability. We’ve shown what’s possible with focus and collaboration! If your organisation relies on the Eclipse Platform or RCP stack, 2026 is the year to get involved.
  • Eclipse Cloud DevTools is bridging modelling, IDEs, container-based tooling, and AI, and the cross-pollination between desktop and cloud dev tools is only getting stronger.

It’s clear: the future of developer tools will be open, extensible, and increasingly AI-native. We at the Eclipse Foundation are proud to help build that future with our community.

 

🙏 Thank You

To all the contributors, maintainers, mentors, Working Group members, adopters and first-time committers who made this year possible: thank you.

 

You’re shaping something that matters, and it’s only just beginning.

See you in 2026.

Thomas Froment


Oniro 2025: building the bridge, brick by brick


As 2025 comes to an end, it is the right time to look back at how the Oniro Working Group has evolved. If the first half of the year was about exploration and setting up our tools, the second half has been about putting those tools to the test.

We started the year with a clear mission: to be the transparent, European-governed bridge to the global OpenHarmony ecosystem. Today, we close the year with a stronger sense of direction and an evolving technical foundation. We are building something ambitious, and while the road ahead is challenging, we are well on our way.

This is the story of our 2025.

🏗️ Technical progress: building blocks for the future

The shift this year was about validation. We moved from planning to active development, prioritising continuous alignment and verifying our architecture on production-grade hardware.

📱 Hardware: The Road to the Volla Phone

One of our primary objectives is proving that Oniro can power consumer hardware. To that end, we have started the complex work of integrating Oniro into the Volla Phone X23. Adapting a multi-kernel OS to run on a production device using a mainline Linux kernel is a significant engineering challenge. We are working through the technical hurdles, and the lessons learned here are helping us build a more independent mobile alternative.

🔄 Alignment & the app ecosystem

Compatibility is not a one-time milestone; it is a continuous process. Throughout the year, our technical team has worked relentlessly to stay synchronised with upstream releases. We have now updated our work to align with OpenHarmony 6.0 (API 20), ensuring that as the global platform evolves, Oniro evolves with it.

Simultaneously, we have been growing our application library through the Oniro4OpenHarmony project. We are moving beyond basic examples to explore more complex interactions, such as SuperDevice logic, validating the potential of distributed systems where devices share resources seamlessly.

🦀 Exploring new frameworks: Rust & Tauri

We are also exploring new horizons for developer tools. We have begun integrating Tauri, aiming to open the door for secure, memory-safe applications using Rust. This initiative aligns with our “secure by design” philosophy, with the goal of eventually enabling high-performance hybrid apps that combine the web frontend ecosystem with the robustness of a Rust backend.

🎓 Empowering developers: the Oniro tutorial series

To lower the barrier to entry, we launched the Oniro Tutorial Series, a comprehensive “Zero-to-Hero” curriculum designed to guide developers from their first setup to shipping real-world applications.

  • From setup to code: the playlist covers essential workflows, including setting up DevEco Studio and the Oniro VS Code extension.
  • Advanced scenarios: we went beyond the basics, teaching developers how to implement AI-assisted coding workflows and how to build wearable apps that access real sensor data.
  • Best practices: the series also focuses on ArkTS/ArkUI patterns, ensuring our community learns to build scalable, high-quality applications from day one.

🌍 Event highlights: a year on the road

While development continued, our team traveled to key technology hubs across Europe.

During the first half of the year, we engaged with communities at major events like FOSDEM, RUSTWeek, App.js Conf, and two OpenHarmony Technical Forums, establishing our presence in the open-source world. You can read more about those early achievements in our previous article Oniro’s Mid-Year update.

The second half of 2025 was equally active, allowing us to connect with diverse audiences and gather the feedback necessary to refine our strategy.

⚜️ OpenHarmony Technical Conference (Florence, September)

One of the most important moments of our year was the OpenHarmony Technical Conference in Florence. This event positioned Oniro as the “Global Gateway” for the ecosystem.

  • The strategic vision: as our Chair Jarosław Marek highlighted in his recent article, Oniro represents a truly European approach to openness. It balances global scale with local trust, “shattering the mobile duopoly” by offering a platform that is secure, compliant, and locally accountable.
  • Technical integration: we focused on concrete integrations with React Native, aiming to empower web developers to use their existing skills to build for Oniro.
  • The impact: it was a pivotal moment where European partners saw the full scale of the ecosystem and the structured collaboration backing it.

🇪🇸 Huawei Connect Europe (Madrid, October)

In Madrid, we shifted our focus to the industrial market. Engaging with manufacturers and integrators in the Edge Computing and IoT spaces, we had interesting conversations and received valuable feedback on the need for a vendor-neutral OS that can handle device fragmentation while remaining compliant with EU regulations.

🇮🇹 SFSCON (Bolzano, November)

Our participation at the South Tyrol Free Software Conference was a key opportunity to reinforce our message.

  • The Oniro vision: in our dedicated talks, Francesco Pham provided a technical overview of the project’s evolution, while Ignacio Ahedo discussed how Oniro’s global collaboration model aims to mitigate device fragmentation.
  • Hands-on experience: Pawel Mandes led a practical workshop, introducing developers to mobile app development on Oniro and demonstrating how accessible the platform is becoming.
  • Strategic planning: we also held our first on-site Steering Committee meeting here, finalising the 2026 Program Plan and ensuring we enter the new year with clear alignment.

🇩🇪 hackaTUM (Munich, November)

In November, we brought the #OniroChallenge to hackaTUM. It was a magnificent experience for our team: our first time ever participating in a hackathon of this scale.

We tasked students with using our smartwatch development kit to create real-time Health & Sport applications. The results were a powerful validation of our platform’s potential. Seeing teams successfully build life-saving technology, such as real-time CPR guidance apps, in just 24 hours was inspiring. Given the success and energy, we are already looking forward to repeating this experience.

🇫🇷 Open Source Experience (Paris, December)

We concluded our tour in Paris at OSXP. Among the discussions at the Cité des Sciences, we focused on Digital Sovereignty. We engaged with the French open-source community on how a vendor-neutral OS provides the strategic independence European smart devices need. It was encouraging to see how many visitors were already familiar with our work, confirming that our message is reaching the right audience.

🚀 The path forward: key goals for 2026

With the lessons learned in 2025, the year ahead is about focus and formalisation. Based on the Program Plan approved by the Steering Committee, these are our top priorities:

📜 Specification v1.0: we will reactivate the Specification Committee to start working on release v1.0 of the Oniro Specification. This is the necessary first step toward a formal Compatibility Program, which will provide the trust commercial adopters need.

🛡️ CRA compliance: we aim to establish Oniro as a reference implementation for the Cyber Resilience Act, offering a “compliant-by-design” OS that simplifies the regulatory journey for European adopters.

🤝 Increasing collaboration: increasing our communication and deepening our cooperation with OpenHarmony is key to ensuring that our technical roadmap stays aligned and benefits from the global ecosystem’s momentum.

🤖 AI integration: there’s a plan to initiate and evaluate the impact of integrating multiple AI agents into Oniro, exploring how intelligent assistants can natively enhance the OS experience.

🌳 Expanding the ecosystem: in 2026, we will focus on specific vertical domains to create a clearer, more welcoming path for new companies to join the Working Group.

✨ A fresh identity: to better reflect our evolving scope and vision, we are planning a complete website refresh. We are also working on a new element to make the Oniro brand more recognisable and friendly to our community. Stay tuned for that reveal!

👥 A community effort

This year’s success would not have been possible without the dedication of our members, technical leads, and close friends. We extend our sincere gratitude to everyone who represented Oniro on stages across the world this year:

Jarosław Marek, Adrian O’Sullivan, Francesco Pham, Carlo Piana, Alberto Pianon, Luca Miotto, Daniel Thompson-Yvetot, Jasper Morgan, Przemysław Sosna, Kaj Grönholm, Marko Saukko, Dr. Liu Yang, Juan Rico, Pawel Mandes, Robert Radzki, Chen Song, and Ignacio Ahedo.

 

💭 Final thoughts

2025 was a year of building foundations. We aligned our roadmap, connected with the community, and started the difficult work of hardware integration.

We are proud of what we have achieved, but we remain humble before the task ahead. The bridge is being built, brick by brick, and we invite you to join us in 2026 to help lay the next stones.

Happy Holidays and a Happy New Year from the Oniro Team!

 

Image
Happy Holidays!

Ignacio Ahedo


Hashtag Jakarta EE #312

Welcome to issue number three hundred and twelve of Hashtag Jakarta EE!

The holiday season is here, and most of us who are involved in Jakarta EE will take a break until the beginning of January. The next Jakarta EE Platform call will be on January 6. If you missed any of the calls, you can always check out the archive of meeting minutes.

There are no more conferences or events planned for the rest of this year, but my schedule for the first months of next year is starting to fill up. Check it out on my page on the Jakarta EE website. It is continuously being updated with the events I will be present at.

The videos from JakartaOne Livestream 2025 are published. Check out the playlist on YouTube.

Ivar Grimstad


Jakarta EE 2025: a year of growth, innovation, and global engagement

As 2025 comes to a close, it’s a great moment to reflect on what we’ve achieved together as the Jakarta EE community. From major platform updates to refreshing the website and growing developer engagement, this year has been full of meaningful progress.

Celebrating Jakarta EE 11

One of our biggest milestones this year was Jakarta EE 11. This time, we did the release in a different way: we released each profile and the platform as soon as it was ready! The Core Profile was available in December 2024, the Web Profile in March 2025, and the full Jakarta EE Platform was finalised in June 2025, reflecting the steady progress of the Jakarta EE community. Compatible products followed right away!

Jakarta EE 11 introduces the new Jakarta Data specification, delivers a modernised testing experience with updated TCK infrastructure based on JUnit 5 and Maven, and expands support for Java 21, including virtual threads. It also streamlines the platform by retiring older specifications such as Managed Beans, reinforcing Contexts and Dependency Injection (CDI) as the preferred programming model, and continues to provide Java Records support.

This release marks a significant step forward in simplifying enterprise Java development, improving developer productivity, and supporting modern, cloud native applications. It’s a true reflection of the community’s collaborative efforts and ongoing commitment to innovation.

Read the Jakarta EE 11 announcement

Introducing Jakarta Agentic AI: A New Standard for Running AI Agents on Jakarta EE

This year marked the introduction of the Jakarta Agentic AI specification project. Aimed at standardising how AI agents run within Jakarta EE runtimes, this new specification will be included in a future release. Much like Jakarta Servlet unified HTTP processing and Jakarta Batch defined batch workflows, Jakarta Agentic AI will provide a clear, annotation-driven API that defines how agents are created, managed, and executed.

Built on CDI as the core component model, the specification will establish consistent lifecycle patterns and usage semantics, making it easier for developers to implement and integrate a wide range of agent types. The project also anticipates deep integration with key Jakarta EE APIs, ensuring seamless interoperability across the platform.

Jakarta Agentic AI is being developed with broad industry collaboration in mind. The project is seeking input from subject-matter experts, vendors, and API consumers both inside and outside the Java ecosystem to build the most open, portable, and future-ready agent execution model possible. Visit the project page to learn more about the specification.
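To picture what an annotation-driven, CDI-style agent model could look like, here is a small plain-Java sketch. To be clear, the specification is still being defined: the `@Agent` annotation, the `SummarizerAgent` class, and the discovery logic below are all invented for illustration and are not part of the spec; in a real Jakarta EE runtime the agent would be a CDI managed bean discovered and executed by the container rather than a hand-instantiated object.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.function.UnaryOperator;

// Hypothetical marker annotation; the spec's real annotations are still being defined.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Agent {
    String name();
}

// A toy "agent": under the spec, this would be a CDI managed bean whose
// lifecycle (creation, execution, disposal) follows standardised patterns.
@Agent(name = "summarizer")
class SummarizerAgent implements UnaryOperator<String> {
    @Override
    public String apply(String input) {
        // A real agent would delegate to an LLM; here we just truncate.
        return input.length() <= 20 ? input : input.substring(0, 20) + "...";
    }
}

public class AgentSketch {
    public static void main(String[] args) {
        SummarizerAgent agent = new SummarizerAgent();
        // A runtime would discover agents through their annotation metadata.
        Agent meta = SummarizerAgent.class.getAnnotation(Agent.class);
        System.out.println(meta.name() + " -> " + agent.apply("hello"));
    }
}
```

The point of the annotation-driven approach is that, as with `@WebServlet` or `@Named`, the container can locate and manage agents declaratively, so application code never wires up agent lifecycles by hand.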

Listening and learning through the Jakarta EE Developer Survey

Our annual Jakarta EE Developer Survey remains one of the best ways to track how developers and organisations are using enterprise Java and shaping their cloud strategies. In 2025, we saw a 20% increase in participation, with over 1,700 participants sharing how they use Jakarta EE in practice.

The results show continued growth and confidence in Jakarta EE across the ecosystem. Notably, even before the full platform release was finalised, 18% of respondents were already using Jakarta EE 11, a strong signal of interest and early adoption.

These insights help us better understand where the community is focusing its energy, from modernising applications and adopting newer Java versions to evaluating cloud strategies and driving specification innovation. We’re grateful to everyone who participated and shared their views.

Explore the 2025 developer survey findings

Learning and contributing: A growing developer ecosystem

The Jakarta EE Learn page expanded its resources to better support developers at all levels. As part of our broader effort to support community growth, we also introduced a new Contribute page, a dedicated space that outlines how individuals and organisations can get involved with Jakarta EE.

The Contribute page highlights the many ways to participate, from writing code and improving documentation to joining specification discussions or helping with community outreach. It also explains why contributing matters, what contributors gain, and how to get started.

To further support newcomers, we launched the Jakarta EE Mentorship Program, which pairs new contributors with experienced community mentors who can provide guidance, answer questions, and help them navigate the contribution process. Whether you’re new to open source or simply new to Jakarta EE, the mentorship experience helps build skills, confidence, and deeper community connections.

Looking ahead: A refreshed web presence

Throughout 2025, our marketing team, in collaboration with the Jakarta EE Marketing Committee, worked on a major Jakarta EE website refresh to better reflect the clarity, maturity, and momentum of the community. While the full launch is now scheduled for early January, the homepage and navigation redesign is already complete and ready for rollout. The updated site features a bold new homepage, improved navigation through streamlined mega menus, and a new “Why Jakarta EE” section that helps visitors quickly understand the platform’s value.

This is just the beginning. Additional updates and structural improvements will continue rolling out through 2026, with a focus on enhancing messaging, navigation, and the overall user experience. Stay tuned for the official launch and more updates in the months ahead.

Global presence: virtual events, conferences, and community connections

Jakarta EE had a visible and impactful presence at face-to-face (F2F) conferences around the world, especially in the first half of the year. From Devnexus to JCON and beyond, Jakarta EE Working Group and community members presented talks, engaged with attendees at our sponsored booths, and built valuable relationships.

In 2025, JakartaOne Livestreams continued to grow with successful regional events in China and the annual JakartaOne Livestream, which attracted more than 6,000 viewers globally, with over 3,200 participants. With 20+ sessions, 15+ speakers, and 14+ hours of multilingual content, the JakartaOne Livestream series continued to drive strong community engagement across regions. Chinese JakartaOne Livestream recordings, as well as the annual JakartaOne Livestream recording, are available on our YouTube channel for anyone interested. 

JakartaOne F2F Meetups further expanded the program’s regional footprint, with events in China and Japan drawing 170+ registered participants and 100+ in-person attendees, supported by high community approval and strong local participation.

With 17 Jakarta EE Tech Talks delivered in 2025, the program remains a vital channel for community learning, collaboration, and inspiration. Topics ranged from microservices and containers to security and observability. Recordings of these sessions are available on our YouTube channel.

Looking forward to 2026 and beyond

As we conclude an impactful 2025, it’s clear that Jakarta EE continues to strengthen its role as the open, vendor-neutral foundation for modern enterprise Java. The progress we’ve made this year, from delivering Jakarta EE 11 and introducing new specifications like Jakarta Agentic AI, to expanding our global events and deepening community engagement, reflects the dedication, collaboration, and passion of everyone involved.

2026 promises to be another exciting year of innovation and growth.

Thank you to all members, contributors, committers, and the wider community for your continued support. Together, we’re driving the platform forward and building a vibrant, open, and innovative ecosystem.

Here’s to another year of progress, collaboration, and innovation with Jakarta EE.

Tatjana Obradovic


Eclipse SDV in 2025: All roads lead to open source

A year in review and a bold look ahead to 2026

By Sara Gallian and Ansgar Lindwedel

As we steer towards the end of 2025, it’s amazing to look in the rear-view mirror and see just how far the Eclipse Software Defined Vehicle (SDV) community has travelled. What started in 2022 as an ambitious concept – an open, collaborative ecosystem for in-vehicle software – has shifted from the drawing board to the open road. Together, we’re shaping not only the technological future of mobility, but also the way organisations across continents collaborate and innovate.

Image
Close-up of a car’s side mirror on a cold day, with frost on the mirror housing. Reflected in the mirror is the “SDV – Eclipse Software Defined Vehicle” logo, with blurred pink and white lights and trees in the background.

This year has been one of intense focus, deep collaboration and integration, and major milestones. And as we look toward 2026, the trajectory is clear: the SDV ecosystem is accelerating faster than ever and working towards a common open source platform and architecture for software-defined mobility, built and sustained by a global community.

This flurry of activity is mirrored in an increasing number of community members taking on important roles within the Eclipse SDV ecosystem: three newly appointed Eclipse SDV Ambassadors are helping us spread the word about our efforts both within and far beyond our own community. In addition, as the new Steering Committee Chair, Björn Reistel (ETAS), is leading the committee in setting direction, building consensus, and ensuring that the Working Group operates effectively and transparently within the Eclipse Foundation’s open governance model.

What fuels our optimism for 2026? Let’s take a look at this year’s key roadmarks of progress. 

Disclaimer: Please note that these are only a few highlights from the year, with many more activities contributing to the success of the past 12 months. For a comprehensive overview of all numbers and metrics (spoiler: the number of pull requests has nearly doubled compared to 2024!), please see this presentation.

2025: A year of breakthroughs

MoU and Eclipse S-CORE 0.5 alpha: a turning point for our SDV platform vision

In June, the Memorandum of Understanding (MoU) made a remarkable impact across both the automotive and software industries. Initiated by the German Association of the Automotive Industry (VDA), the declaration was signed by 11 automotive companies and intentionally put open source, and Eclipse S-CORE in particular, at its core.

The release of Eclipse S-CORE 0.5 alpha and beta marked one of the most significant accomplishments of the year. Happening not even one year after the project was first announced, and only four months after its official launch, this milestone validated the architectural direction of our SDV platform and provided a tangible foundation for the upcoming 1.0 release. It also united contributors around a shared focus, proving what open innovation can achieve when the community rallies behind a common goal.

Release 0.5 delivers four core modules that lay the groundwork for future series-grade software-defined vehicles:

  • Base Libraries (Baselibs): providing common functionality, including basic logging support
  • Inter-Process Communication (IPC): enabling reliable and deterministic data exchange between components
  • Orchestration: ensuring safe execution across mixed-criticality workloads
  • Persistency: securing data storage across power cycles
Most development processes are undergoing external audits and are being prepared to comply with key automotive standards for quality (ASPICE), functional safety (ISO 26262), and cybersecurity (ISO/SAE 21434).

Eclipse S-CORE 0.5 also introduces an open reference platform, first showcased on QNX and Qualcomm hardware, and designed with the flexibility to support multiple operating systems and hardware targets.

Eclipse OpenSOVD: vehicle diagnostics for the masses, with massive momentum

Apart from its meteoric rise in activity and popularity, Eclipse S-CORE also sparked the creation of another new project that is well on its way to becoming a center of gravity within the Eclipse SDV ecosystem. The launch of Eclipse OpenSOVD exceeded all expectations, drawing broad participation from OEMs, suppliers, and technology leaders. As OpenSOVD committer Tim Kliefoth (Mercedes-Benz Tech Innovation) explained in an interview, the S-CORE team recognised a “market opportunity” in the absence of an open-source diagnostics stack built on the new SOVD standard:

“What prompted it was the lack of a good open source automotive diagnostic stack and the clear need for one in the industry. We saw this quite clearly within the Eclipse S-CORE project – there was a strong demand for such a solution. At the same time, there’s been a shift toward SOVD, which is still a relatively new technology compared to older systems based on UDS. So, in a way, it was the right market opportunity at the right time. Additionally, the SOVD standard is based on an ISO standard, which made it the perfect foundation for an open source project, allowing us to work closely with the standard itself and enrich it. For me, it’s really a synergy between the standard and the implementation.”

What could have been a niche initiative or side project rapidly transformed into a globally recognised effort to define and standardise interfaces for vehicle data access. The energy at the kickoff alone has already set the stage for impactful work in 2026, and we hope S-CORE will remain not only a technology platform, but also an innovation incubator that inspires new projects driven by real industry needs.

The X-CORE Platform Council: enabling the projects to focus on releasing the platform

With the establishment of the X-CORE Platform Council, an ad-hoc subcommittee of the Eclipse SDV Working Group, we are now looking ahead and turning our attention to broader integration initiatives. The Platform Council supports complex integration projects, such as Eclipse S-CORE, by overseeing key non-technical activities. While operational teams handle communications, branding, and financial matters, the council provides strategic guidance and leads decision-making to ensure effective project alignment and progress. The X-CORE Platform Council operates under a clearly defined governance structure.

In 2026, we aim to enable the Eclipse S-CORE 1.0 release and maybe even onboard additional core projects, including those emerging in areas such as commercial vehicles or advanced HMI solutions, further expanding the reach and applicability of the ecosystem.

Achievements in functional safety

Eclipse TSF: Successfully assessed to ASIL D

Functional safety also made significant strides in 2025: achieving a successful ASIL D safety assessment of the Eclipse Trustable Software Framework (TSF) is an extraordinary milestone, both technically and symbolically. It demonstrates that open source can meet the highest levels of functional safety, and it establishes TSF as a foundational asset for organisations building safety-critical automotive systems. As Paul Sherwood, Codethink’s Chairman, put it in the press release:

“This assessment validates that trust in software, especially open source, can be both measurable and auditable.”

The assessment was performed by exida, a globally recognised authority in functional safety. Congratulations to our strategic member Codethink on this important step!

The Eclipse TSF project focuses on practical, scalable ways to understand and quantify risks in software engineering, especially for complex systems that span software, hardware, safety, and security. Instead of relying on traditional, error-prone documentation and requirements tools, TSF introduces a unified methodology built around Tenets and Assertions – structured statements managed directly in a Git repository – that form a directed acyclic graph linking high-level expectations to concrete evidence. 
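To make the graph structure concrete, here is a minimal, purely illustrative Python sketch. The node names and the dictionary format are invented for this example and do not reflect TSF’s actual repository layout; the point is only that tenets, assertions, and evidence form a directed acyclic graph that can be checked mechanically.

```python
# Illustrative only: node names and format are invented, not TSF's real schema.
# Each node maps to the nodes it depends on (tenet -> assertions -> evidence).
graph = {
    "tenet:software-is-tested": ["assert:unit-tests-pass", "assert:coverage-above-90"],
    "assert:unit-tests-pass": ["evidence:ci-run-4711"],
    "assert:coverage-above-90": ["evidence:coverage-report-4711"],
    "evidence:ci-run-4711": [],
    "evidence:coverage-report-4711": [],
}

def is_acyclic(graph):
    """Verify the claim graph is a DAG via depth-first search."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def visit(node):
        if colour[node] == GREY:   # back edge: we found a cycle
            return False
        if colour[node] == BLACK:  # already fully explored
            return True
        colour[node] = GREY
        if not all(visit(dep) for dep in graph.get(node, [])):
            return False
        colour[node] = BLACK
        return True

    return all(visit(node) for node in graph)

print(is_acyclic(graph))  # True: every tenet traces down to concrete evidence
```

Because the nodes live as plain files in a Git repository, the same kind of traversal can confirm that every high-level expectation ultimately rests on recorded evidence.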

Functional safety process released at the Eclipse Foundation

Alongside TSF, we released the Functional Safety Process at the Eclipse Foundation (it had already been announced in late 2024), giving companies a clear path for contributing to or deploying safety-relevant open source components. This is a significant step in maturing the open SDV landscape and increasing industry trust in collaborative development.

Expanding our global community 

Meetups in Korea, Japan, and the USA

Our community events this year were nothing short of exceptional and a true testament to the increasing diversification and global impact of the Eclipse SDV ecosystem. With our new event format, the Open Community Meetups, anyone in our ecosystem can host local, in-person gatherings.

  • The first-ever Eclipse SDV community meetups in South Korea and Japan attracted enthusiastic crowds and demonstrated a growing appetite for SDV innovation across Asia.
  • The second U.S. meetup in Detroit reaffirmed North America’s central role in the future of SDV.
[Image: Group photo of attendees at the Eclipse SDV Community Meetup in Yokohama, in front of a screen reading “Welcome to the Eclipse SDV Community Meetup in Yokohama hosted by Bosch” with the Eclipse Foundation logo.]

The first-ever Eclipse SDV Meetup at the Bosch headquarters in Yokohama, Japan, was a great success. Photo provided by Masuda Miwako (ETAS)

 

These events showcased what makes this community unique: openness, energy, expertise, and a shared belief in shaping the future together. Special thanks to our members LG Electronics, Bosch/ETAS, and Microsoft for hosting and sponsoring these immensely successful gatherings!

Eclipse SDV Hackathon

The third Eclipse SDV Hackathon brought together more than 100 participants, for the first time across two simultaneous locations: the Bosch Innovation Campus in Berlin and 42 Porto. Over an energetic two-and-a-half-day sprint of creativity and collaboration, 22 teams, supported by expert coaches from organisations including ETAS, Red Hat, Elektrobit, T-Systems, and Bosch, set out to design and prototype innovative features powered by software-defined vehicle technologies. They drew on a wide range of Eclipse Foundation projects, including Ankaios, Kuksa, Symphony, Velocitas, Mosquitto, ThreadX, uProtocol, Muto, Zenoh, and OpenBSW.

After 30 hours of intensive coding balanced with moments of fun and relaxation, seven finalist teams emerged whose forward-thinking solutions impressed judges with their creativity and real-world potential. The winning application, an AI Companion, even became an Eclipse SDV Blueprint.

[Image: Group photo of hackathon participants in matching Eclipse SDV hackathon sweatshirts, with event branding and a retro-style camper van display in the background.]

Hackathon participants at Bosch Innovation Campus in Berlin

The rise of AI in automotive

When Eclipse LMOS, an open source, cloud-native platform for building and running multi-agent systems, was presented at the SDV Community Days at Lunatech in Rotterdam in early 2025, the era of Artificial Intelligence firmly entered the Eclipse SDV ecosystem. Only a few weeks later, Eclipse LMOS was accepted as a project under the purview of Eclipse SDV.

What’s more, we have established a new Special Interest Group (SIG) around the topic of AI, with a kick-off scheduled for the beginning of next year. This marks the launch of our work on an AI-supported toolchain, a key initiative for the Eclipse Foundation as the steward of our shared infrastructure and an important area of support for SDV projects.

The AI Special Interest Group will complement our three existing SIGs on Rust, ThreadX, and Automotive Processes, all of which launched in 2024.

Eclipse SDV is deeply embedded in EU research projects

For us, industry needs and research that advances and strengthens the European automotive ecosystem always go hand in hand: This year, we significantly strengthened our presence in EU-funded initiatives, with both FEDERATE and HAL4SDV now in full swing and additional project proposals actively underway. Both initiatives collaborated as partners of our 2025 Eclipse SDV Hackathon.

We are also looking ahead with anticipation to the formation of ECAVA (European Connected and Autonomous Vehicle Alliance), for which we have submitted our application and are currently awaiting the outcome.

On the roadmap: 2026 will be transformational

[Image: Straight, empty road stretching into the distance under a pink-and-purple sky, with the “SDV – Eclipse Software Defined Vehicle” logo centred above it.]

If you’re curious about the broader trends that will shape the SDV landscape in 2026, you don’t want to miss our Eclipse SDV Ambassadors’ top predictions for the 2026 automotive and SDV sector (teaser: they share great insights on what will be the dominant architecture, key enablers, and challenges next year). Responding to these overarching developments, the Eclipse SDV community will continue along the successful roads it has already travelled, yet also venture onto new paths.

Watch for big news at CES

We don’t want to spoil the surprise, but if you’re attending CES, keep your eyes open. Major announcements will set the tone for how the SDV movement will evolve in 2026 and beyond.

Eclipse S-CORE 1.0: The platform takes center stage

With S-CORE 0.5 behind us, all eyes are now on the planned S-CORE 1.0 release. This is more than a version milestone; it is the moment when the SDV platform becomes truly ready for widespread adoption and integration.

Eclipse TSF fuels a new industry safety standard

Building on the ASIL D assessment, TSF will take a leading role in shaping new safety standards for open source automotive software. The work in 2026 will heavily influence how safety-critical open ecosystems evolve globally.

International growth continues

As interest surges across Asia, Europe, and North America, we will continue expanding our footprint, reinforcing our community’s position as the world’s largest and most diverse SDV collaboration hub.

Beyond in-vehicle software: tool chain and cloud services

From 2022 to 2025, much of our effort concentrated intentionally on the in-vehicle software layer. In 2026, our scope widens.
You can expect increased activity around areas essential to delivering a truly end-to-end SDV ecosystem, including toolchains, cloud services, development workflows, and integration technologies.

Challenges and opportunities: broadening the horizon

With Eclipse S-CORE as our primary integration project, much of our community’s attention has naturally converged on enabling a strong, stable platform release for 2026. This focus has been necessary and incredibly productive.

But as we move into the next phase, our challenge and opportunity is to broaden the ecosystem once again:

  • Encouraging new integration projects, not just platform-level work
  • Expanding into cloud-oriented and tooling domains
  • Creating space for experimentation, innovation, and niche specialisation
  • Bringing in new members who recognise now is the moment to join

For organisations watching from the sidelines, the message is simple: the SDV convoy is hitting the road. Join now, or risk being left in the rear-view mirror.

2025 was a foundation. 2026 will be a leap.

This year has proven that the Eclipse SDV community is capable of delivering real, industry-shaping innovation. The groundwork is laid. The momentum is building. And the future is bright.

To everyone who contributed, collaborated, attended events, wrote code, reviewed documents, or simply cheered us on: Thank you.

2026 will be a transformative year, and we’re thrilled to embark on it together.

Happy holidays to our community, current and future Eclipse SDV members. Here’s to a

Spectacular,

Dynamic, and

Visionary

2026 …

… with open code and open roads!

Sara & Ansgar

Sara Gallian


Giving Users A Voice Through Virtual Personas

In my previous article, I explored how AI can help us create functional personas more efficiently. We looked at building personas that focus on what users are trying to accomplish rather than demographic profiles that look good on posters but rarely change design decisions.

But creating personas is only half the battle. The bigger challenge is getting those insights into the hands of people who need them, at the moment they need them.

Every day, people across your organization make decisions that affect user experience. Product teams decide which features to prioritize. Marketing teams craft campaigns. Finance teams design invoicing processes. Customer support teams write response templates. All of these decisions shape how users experience your product or service.

And most of them happen without any input from actual users.

The Problem With How We Share User Research

You do the research. You create the personas. You write the reports. You give the presentations. You even make fancy infographics. And then what happens?

The research sits in a shared drive somewhere, slowly gathering digital dust. The personas get referenced in kickoff meetings and then forgotten. The reports get skimmed once and never opened again.

When a product manager is deciding whether to add a new feature, they probably do not dig through last year’s research repository. When the finance team is redesigning the invoice email, they almost certainly do not consult the user personas. They make their best guess and move on.

This is not a criticism of those teams. They are busy. They have deadlines. And honestly, even if they wanted to consult the research, they probably would not know where to find it or how to interpret it for their specific question.

The knowledge stays locked inside the heads of the UX team, who cannot possibly be present for every decision being made across the organization.

What If Users Could Actually Speak?

What if, instead of creating static documents that people need to find and interpret, we could give stakeholders a way to consult all of your user personas at once?

Imagine a marketing manager working on a new campaign. Instead of trying to remember what the personas said about messaging preferences, they could simply ask: “I’m thinking about leading with a discount offer in this email. What would our users think?”

And the AI, drawing on all your research data and personas, could respond with a consolidated view: how each persona would likely react, where they agree, where they differ, and a set of recommendations based on their collective perspectives. One question, synthesized insight across your entire user base.

This is not science fiction. With AI, we can build exactly this kind of system. We can take all of that scattered research (the surveys, the interviews, the support tickets, the analytics, the personas themselves) and turn it into an interactive resource that anyone can query for multi-perspective feedback.

Building the User Research Repository

The foundation of this approach is a centralized repository of everything you know about your users. Think of it as a single source of truth that AI can access and draw from.

If you have been doing user research for any length of time, you probably have more data than you realize. It is just scattered across different tools and formats:

  • Survey results sitting in your survey platform,
  • Interview transcripts in Google Docs,
  • Customer support tickets in your helpdesk system,
  • Analytics data in various dashboards,
  • Social media mentions and reviews,
  • Old personas from previous projects,
  • Usability test recordings and notes.

The first step is gathering all of this into one place. It does not need to be perfectly organized. AI is remarkably good at making sense of messy inputs.

If you are starting from scratch and do not have much existing research, you can use AI deep research tools to establish a baseline.

These tools can scan the web for discussions about your product category, competitor reviews, and common questions people ask. This gives you something to work with while you build out your primary research.

Creating Interactive Personas

Once you have your repository, the next step is creating personas that the AI can consult on behalf of stakeholders. This builds directly on the functional persona approach I outlined in my previous article, with one key difference: these personas become lenses through which the AI analyzes questions, not just reference documents.

The process works like this:

  1. Feed your research repository to an AI tool.
  2. Ask it to identify distinct user segments based on goals, tasks, and friction points.
  3. Have it generate detailed personas for each segment.
  4. Configure the AI to consult all personas when stakeholders ask questions, providing consolidated feedback.

Here is where this approach diverges significantly from traditional personas. Because the AI is the primary consumer of these persona documents, they do not need to be scannable or fit on a single page. Traditional personas are constrained by human readability: you have to distill everything down to bullet points and key quotes that someone can absorb at a glance. But AI has no such limitation.

This means your personas can be considerably more detailed. You can include lengthy behavioral observations, contradictory data points, and nuanced context that would never survive the editing process for a traditional persona poster. The AI can hold all of this complexity and draw on it when answering questions.

You can also create different lenses or perspectives within each persona, tailored to specific business functions. Your “Weekend Warrior” persona might have a marketing lens (messaging preferences, channel habits, campaign responses), a product lens (feature priorities, usability patterns, upgrade triggers), and a support lens (common questions, frustration points, resolution preferences). When a marketing manager asks a question, the AI draws on the marketing-relevant information. When a product manager asks, it pulls from the product lens. Same persona, different depth depending on who is asking.

The personas should still include all the functional elements we discussed before: goals and tasks, questions and objections, pain points, touchpoints, and service gaps. But now these elements become the basis for how the AI evaluates questions from each persona’s perspective, synthesizing their views into actionable recommendations.

Implementation Options

You can set this up with varying levels of sophistication depending on your resources and needs.

The Simple Approach

Most AI platforms now offer project or workspace features that let you upload reference documents. In ChatGPT, these are called Projects. Claude has a similar feature. Copilot and Gemini call them Spaces or Gems.

To get started, create a dedicated project and upload your key research documents and personas. Then write clear instructions telling the AI to consult all personas when responding to questions. Something like:

You are helping stakeholders understand our users. When asked questions, consult all of the user personas in this project and provide: (1) a brief summary of how each persona would likely respond, (2) an overview highlighting where they agree and where they differ, and (3) recommendations based on their collective perspectives. Draw on all the research documents to inform your analysis. If the research does not fully cover a topic, search social platforms like Reddit, Twitter, and relevant forums to see how people matching these personas discuss similar issues. If you are still unsure about something, say so honestly and suggest what additional research might help.

This approach has some limitations. There are caps on how many files you can upload, so you might need to prioritize your most important research or consolidate your personas into a single comprehensive document.

The More Sophisticated Approach

For larger organizations or more ongoing use, a tool like Notion offers advantages because it can hold your entire research repository and has AI capabilities built in. You can create databases for different types of research, link them together, and then use the AI to query across everything.

The benefit here is that the AI has access to much more context. When a stakeholder asks a question, it can draw on surveys, support tickets, interview transcripts, and analytics data all at once. This makes for richer, more nuanced responses.

What This Does Not Replace

I should be clear about the limitations.

Virtual personas are not a substitute for talking to real users. They are a way to make existing research more accessible and actionable.

There are several scenarios where you still need primary research:

  • When launching something genuinely new that your existing research does not cover;
  • When you need to validate specific designs or prototypes;
  • When your repository data is getting stale;
  • When stakeholders need to hear directly from real humans to build empathy.

In fact, you can configure the AI to recognize these situations. When someone asks a question that goes beyond what the research can answer, the AI can respond with something like: “I do not have enough information to answer that confidently. This might be a good question for a quick user interview or survey.”

And when you do conduct new research, that data feeds back into the repository. The personas evolve over time as your understanding deepens. This is much better than the traditional approach, where personas get created once and then slowly drift out of date.

The Organizational Shift

If this approach catches on in your organization, something interesting happens.

The UX team’s role shifts from being the gatekeepers of user knowledge to being the curators and maintainers of the repository.

Instead of spending time creating reports that may or may not get read, you spend time ensuring the repository stays current and that the AI is configured to give helpful responses.

Research communication changes from push (presentations, reports, emails) to pull (stakeholders asking questions when they need answers). User-centered thinking becomes distributed across the organization rather than concentrated in one team.

This does not make UX researchers less valuable. If anything, it makes them more valuable because their work now has a wider reach and greater impact. But it does change the nature of the work.

Getting Started

If you want to try this approach, start small. If you need a primer on functional personas before diving in, I have written a detailed guide to creating them. Pick one project or team and set up a simple implementation using ChatGPT Projects or a similar tool. Gather whatever research you have (even if it feels incomplete), create one or two personas, and see how stakeholders respond.

Pay attention to what questions they ask. These will tell you where your research has gaps and what additional data would be most valuable.

As you refine the approach, you can expand to more teams and more sophisticated tooling. But the core principle stays the same: take all that scattered user knowledge and give it a voice that anyone in your organization can hear.

In my previous article, I argued that we should move from demographic personas to functional personas that focus on what users are trying to do. Now I am suggesting we take the next step: from static personas to interactive ones that can actually participate in the conversations where decisions get made.

Because every day, across your organization, people are making decisions that affect your users. And your users deserve a seat at the table, even if it is a virtual one.

Further Reading On SmashingMag

  • “A Closer Look At Personas: What They Are And How They Work (Part 1)”, Shlomo Goltz
  • “How To Improve Your Design Process With Data-Based Personas”, Tim Noetzel
  • “How To Make Your UX Research Hard To Ignore”, Vitaly Friedman
  • “How To Build Strong Customer Relationships For User Research”, Renaissance Rachel

How To Measure The Impact Of Features

So we design and ship a shiny new feature. How do we know if it’s working? How do we measure and track its impact? There is no shortage of UX metrics, but what if we wanted to establish a simple, repeatable, meaningful UX metric — specifically for our features? Well, let’s see how to do just that.

I first heard about the TARS framework from Adrian H. Raudaschl’s wonderful article on “How To Measure Impact of Features”. Here, Adrian highlighted how his team tracks and decides which features to focus on — and then maps them against each other in a 2×2 quadrant matrix.

It turned out to be a very useful framework to visualize the impact of UX work through the lens of business metrics.

Let’s see how it works.

1. Target Audience (%)

We start by quantifying the target audience by exploring what percentage of a product’s users have the specific problem that a feature aims to solve. We can study existing or similar features that try to solve similar problems, and how many users engage with them.

Target audience isn’t the same as feature usage though. As Adrian noted, if we know that an existing Export Button feature is used by 5% of all users, it doesn’t mean that the target audience is 5%. More users might have the problem that the export feature is trying to solve, but they can’t find it.

Question we ask: “What percentage of all our product’s users have that specific problem that a new feature aims to solve?”

2. Adoption (%)

Next, we measure how well we are “acquiring” our target audience. For that, we track how many users actually engage successfully with that feature over a specific period of time.

We don’t focus on CTRs or session duration here, but rather on whether users meaningfully engage with it: for example, anything that signals they found it valuable, such as sharing the export URL, the number of exported files, or the usage of filters and settings.

High feature adoption (>60%) suggests that the problem was impactful. Low adoption (<20%) might imply that the problem has simple workarounds that people have relied upon. Changing habits takes time, too, and so low adoption in the beginning is expected.

Sometimes, low feature adoption has nothing to do with the feature itself, but rather where it sits in the UI. Users might never discover it if it’s hidden or if it has a confusing label. It must be obvious enough for people to stumble upon it.

Low adoption doesn’t always equal failure. If a problem only affects 10% of users, hitting 50–75% adoption within that specific niche means the feature is a success.

Question we ask: “What percentage of active target users actually use the feature to solve that problem?”

3. Retention (%)

Next, we study whether a feature is actually used repeatedly. We measure the frequency of use, or specifically, how many users who engaged with the feature actually keep using it over time. Typically, it’s a strong signal for meaningful impact.

If a feature has >50% retention rate (avg.), we can be quite confident that it has a high strategic importance. A 25–35% retention rate signals medium strategic significance, and retention of 10–20% is then low strategic importance.

Question we ask: “Of all the users who meaningfully adopted a feature, how many came back to use it again?”

4. Satisfaction Score (CES)

Finally, we measure the level of satisfaction that users have with that feature that we’ve shipped. We don’t ask everyone — we ask only “retained” users. It helps us spot hidden troubles that might not be reflected in the retention score.

Once users have actually used a feature multiple times, we ask them how easy it was to solve their problem with it, on a scale from “much more difficult” to “much easier than expected”. This gives us a Customer Effort Score (CES) for the feature.

Using TARS For Feature Strategy

Once we start measuring with TARS, we can calculate an S÷T score — the percentage of Satisfied Users ÷ Target Users. It gives us a sense of how well a feature is performing for our intended target audience. Once we do that for every feature, we can map all features across 4 quadrants in a 2×2 matrix.

Overperforming features are worth paying attention to: they have low retention but high satisfaction. It might simply be features that users don’t have to use frequently, but when they do, it’s extremely effective.

Liability features have high retention but low satisfaction, so perhaps we need to work on them to improve them. And then we can also identify core features and project features — and have a conversation with designers, PMs, and engineers on what we should work on next.
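As a small illustration, the S÷T score and the quadrant mapping could be computed as follows. This is a sketch under stated assumptions: the 50% cut-offs on the two axes are placeholders rather than values from the TARS framework, and the `FeatureMetrics` names are invented.

```python
from dataclasses import dataclass

@dataclass
class FeatureMetrics:
    target_pct: float     # T: % of all users who have the problem
    adoption_pct: float   # A: % of target users who adopted the feature
    retention_pct: float  # R: % of adopters who came back
    satisfied_pct: float  # S: % of retained users reporting satisfaction

def s_over_t(m: FeatureMetrics) -> float:
    """Satisfied users as a percentage of the target audience.

    Satisfied users = T * A * R * S (as fractions of all users),
    so dividing by T leaves adoption * retention * satisfaction.
    """
    return (m.adoption_pct / 100) * (m.retention_pct / 100) * m.satisfied_pct

def quadrant(m: FeatureMetrics,
             retention_cut: float = 50.0,
             satisfaction_cut: float = 50.0) -> str:
    """Place a feature in the 2x2 matrix (cut-offs are illustrative)."""
    high_r = m.retention_pct >= retention_cut
    high_s = m.satisfied_pct >= satisfaction_cut
    if high_r and high_s:
        return "core"
    if high_r:
        return "liability"        # used a lot, but frustrating
    if high_s:
        return "overperforming"   # rarely needed, highly effective
    return "project"

export = FeatureMetrics(target_pct=10, adoption_pct=60,
                        retention_pct=30, satisfied_pct=80)
print(quadrant(export), round(s_over_t(export), 1))  # overperforming 14.4
```

Note that only retention and satisfaction place a feature on the matrix; the S÷T score then tells you how much of the intended audience the feature actually serves.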

Conversion Rate Is Not a UX Metric

TARS doesn’t cover conversion rate, and for a good reason. As Fabian Lenz noted, conversion is often considered to be the ultimate indicator of success — yet in practice it’s always very difficult to present a clear connection between smaller design initiatives and big conversion goals.

The truth is that almost everybody on the team is working towards better conversion. An uptick might be connected to many different initiatives — from sales and marketing to web performance boost to seasonal effects to UX initiatives.

UX can, of course, improve conversion, but it’s not really a UX metric. Often, people simply can’t choose the product they are using. And often a desired business outcome comes out of necessity and struggle, rather than trust and appreciation.

High Conversion Despite Bad UX

As Fabian writes, high conversion rate can happen despite poor UX, because:

  • Strong brand power pulls people in,
  • Urgency tactics are aggressive but effective,
  • Prices are extremely attractive,
  • Marketing performs brilliantly,
  • Customers are historically loyal,
  • Users simply have no alternative.

Low Conversion Despite Great UX

At the same time, a low conversion rate can occur despite great UX, because:

  • Offers aren’t relevant to the audience,
  • Users don’t trust the brand,
  • The business model is weak or the risk of failure is high,
  • Marketing doesn’t reach the right audience,
  • External factors interfere (price, timing, competition).

An improved conversion is the positive outcome of UX initiatives. But good UX work typically improves task completion, reduces time on task, minimizes errors, and avoids decision paralysis. And there are plenty of actionable design metrics we could use to track UX and drive sustainable success.

Wrapping Up

Product metrics alone don’t always provide an accurate view of how well a product performs. Sales might perform well, but users might be extremely inefficient and frustrated. Yet the churn is low because users can’t choose the tool they are using.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics. Personally, I would extend TARS with UX-focused metrics and KPIs as well — depending on the needs of the project.

Huge thanks to Adrian H. Raudaschl for putting it together. And if you are interested in metrics, I highly recommend you follow him for practical and useful guides all around just that!

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

  • Video + UX Training
  • Video only

Video + UX Training

$ 495.00 $ 799.00

Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100 days money-back-guarantee.

Video only

$ 250.00 $ 395.00

Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 3 video courses.

Useful Resources

  • “How To Measure UX and Design Impact”, by yours truly
  • “Business Thinking For Designers”, by Ryan Rumsey
  • “ROI of Design Project”
  • “How the Right UX Metrics Show Game-Changing Value”, by Jared Spool
  • “Research Sample Size Calculators”

Further Reading

  • “Designing For Stress And Emergency”, Vitaly Friedman
  • “AI In UX: Achieve More With Less”, Paul Boag
  • “The Accessibility Problem With Authentication Methods Like CAPTCHA”, Eleanor Hecks
  • “From Prompt To Partner: Designing Your Custom AI Assistant”, Lyndon Cerejo

Smashing Animations Part 7: Recreating Toon Text With CSS And SVG

After finishing a project that required me to learn everything I could about CSS and SVG animations, I started writing this series about Smashing Animations and “How Classic Cartoons Inspire Modern CSS.” To round off this year, I want to show you how to use modern CSS to create that element that makes Toon Titles so impactful: their typography.

Title Artwork Design

In the silent era of the 1920s and early ’30s, the typography of a film’s title card created a mood, set the scene, and reminded an audience of the type of film they’d paid to see.

Cartoon title cards were also branding, mood, and scene-setting, all rolled into one. In the early years, when major studio budgets were bigger, these title cards were often illustrative and painterly.

But when television boomed during the 1950s, budgets dropped, and cards designed by artists like Lawrence “Art” Goble adopted a new visual language, becoming more graphic, stylised, and less intricate.

Note: Lawrence “Art” Goble is one of the often overlooked heroes of mid-century American animation. He primarily worked for Hanna-Barbera during its most influential years of the 1950s and 1960s.

Goble wasn’t a character animator. His role was to create atmosphere, so he designed environments for The Flintstones, Huckleberry Hound, Quick Draw McGraw, and Yogi Bear, as well as the opening title cards that set the tone. His title cards, featuring paintings with a logo overlaid, helped define the iconic look of Hanna-Barbera.

Goble’s artwork for characters such as Quick Draw McGraw and Yogi Bear was effective on smaller TV screens. Rather than reproducing a still from the cartoon, he focused on presenting a single, strong idea — often in silhouette — that captured its essence. In “The Buzzin’ Bear,” Yogi buzzes by in a helicopter. He bounces away, pic-a-nic basket in hand, in “Bear on a Picnic,” and for his “Prize Fight Fright,” Yogi boxes the title text.

With little or no motion to rely on, Goble’s single frames had to create a mood, set the scene, and describe a story. They did this using flat colours, graphic shapes, and typography that was frequently integrated into the artwork.

As designers who work on the web, toon titles can teach us plenty about how to convey a brand’s personality, make a first impression, and set expectations for someone’s experience using a product or website. We can learn from the artists’ techniques to create effective banners, landing-page headers, and even good ol’ fashioned splash screens.

Toon Title Typography

Cartoon title cards show how merging type with imagery delivers the punch a header or hero needs. With a handful of text-shadow, text-stroke, and transform tricks, modern CSS lets you tap into that same energy.

The Toon Text Title Generator

Partway through writing this article, I realised it would be useful to have a tool for generating text styled like the cartoon titles I love so much. So I made one.

My Toon Text Title Generator lets you experiment with colours, strokes, and multiple text shadows. You can adjust paint order, apply letter spacing, preview your text in a selection of sample fonts, and then copy the generated CSS straight to your clipboard to use in a project.

Toon Title CSS

You can simply copy-paste the CSS that the Toon Text Title Generator provides you. But let’s look closer at what it does.

Text shadow

Look at the type in this title from Augie Doggie’s episode “Yuk-Yuk Duck,” with its pale yellow letters and dark, hard, offset shadow that lifts it off the background and creates the illusion of depth.

You probably already know that text-shadow accepts four values: (1) horizontal and (2) vertical offsets, (3) blur, and (4) a colour which can be solid or semi-transparent. Those offset values can be positive or negative, so I can replicate “Yuk-Yuk Duck” using a hard shadow pulled down and to the right:

color: #f7f76d;
text-shadow: 5px 5px 0 #1e1904;

On the other hand, this “Pint Giant” title has a different feel with its negative semi-soft shadow:

color: #c2a872;
text-shadow:
  -7px 5px 0 #0b100e,
  0 -5px 10px #546c6f;

To add extra depth and create more interesting effects, I can layer multiple shadows. For “Let’s Duck Out,” I combine four shadows: the first a solid shadow with a negative horizontal offset to lift the text off the background, followed by progressively softer shadows to create a blur around it:

color: #6F4D80;
text-shadow:
  -5px 5px 0 #260e1e, /* Shadow 1 */
  0 0 15px #e9ce96,   /* Shadow 2 */
  0 0 30px #e9ce96,   /* Shadow 3 */
  0 0 30px #e9ce96;   /* Shadow 4 */

These shadows show that using text-shadow isn’t just about creating lighting effects, as they can also be decorative and add personality.

Text Stroke

Many cartoon title cards feature letters with a bold outline that makes them stand out from the background. I can recreate this effect using text-stroke. This property is still only officially available via a -webkit- prefix, but the prefixed version is supported across all modern browsers.

text-stroke is a shorthand for two properties. The first, text-stroke-width, draws a contour around individual letters, while the second, text-stroke-color, controls its colour. For “Whatever Goes Pup,” I added a 4px blue stroke to the yellow text:

color: #eff0cd;
-webkit-text-stroke: 4px #7890b5;
text-stroke: 4px #7890b5;

Strokes can be especially useful when they’re combined with shadows, so for “Growing, Growing, Gone,” I added a thin 3px stroke to a barely blurred 1px shadow to create this three-dimensional text effect:

color: #fbb999;
text-shadow: 3px 5px 1px #5160b1;
-webkit-text-stroke: 3px #984336;
text-stroke: 3px #984336;

Paint Order

Using text-stroke doesn’t always produce the expected result, especially with thinner letters and thicker strokes, because by default the browser draws a stroke over the fill. Sadly, CSS still does not permit me to adjust stroke placement as I often do in Sketch. However, the paint-order property has values that allow me to place the stroke behind, rather than in front of, the fill.

paint-order: stroke paints the stroke first, then the fill, whereas paint-order: fill does the opposite:

color: #fbb999;
paint-order: stroke;
text-shadow: 3px 5px 1px #5160b1;
text-stroke-color: #984336;
text-stroke-width: 3px;

An effective stroke keeps letters readable, adds weight, and — when combined with shadows and paint order — gives flat text real presence.

Backgrounds Inside Text

Many cartoon title cards go beyond flat colour by adding texture, gradients, or illustrated detail to the lettering. Sometimes that’s a texture, other times it might be a gradient with a subtle tonal shift. On the web, I can recreate this effect by using a background image or gradient behind the text, and then clipping it to the shape of the letters. This relies on two properties working together: background-clip: text and text-fill-color: transparent.

First, I apply a background behind the text. This can be a bitmap or vector image or a CSS gradient. For this example from the Quick Draw McGraw episode “Baba Bait,” the title text includes a subtle top–bottom gradient from dark to light:

background: linear-gradient(0deg, #667b6a, #1d271a);

Next, I clip that background to the glyphs and make the text transparent so the background shows through:

-webkit-background-clip: text;
-webkit-text-fill-color: transparent;

With just those two lines, the background is no longer painted behind the text; instead, it’s painted within it. This technique works especially well when combined with strokes and shadows. A clipped gradient provides the lettering with colour and texture, a stroke keeps its edges sharp, and a shadow elevates it from the background. Together, they recreate the layered look of hand-painted title cards using nothing more than a little CSS. As always, test clipped text carefully, as browser quirks can sometimes affect shadows and rendering.
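Putting these pieces together, here's a sketch of a full toon-title style that combines a clipped gradient, a stroke painted behind the fill, and an offset shadow. The class name and colour values are illustrative, not taken from a specific title card:

```css
.toon-title {
  font-size: 5rem;
  /* Gradient painted inside the glyphs */
  background: linear-gradient(0deg, #667b6a, #1d271a);
  -webkit-background-clip: text;
  background-clip: text;
  -webkit-text-fill-color: transparent;
  /* Stroke behind the fill keeps thin letters readable */
  paint-order: stroke;
  -webkit-text-stroke: 3px #1d271a;
  /* Hard offset shadow lifts the text off the background.
     Browser quirks can draw shadows over clipped fills, so test carefully. */
  text-shadow: 5px 5px 0 rgb(0 0 0 / 0.4);
}
```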

Splitting Text Into Individual Characters

Sometimes I don’t want to style a whole word or heading. I want to style individual letters — to nudge a character into place, give one glyph extra weight, or animate a few letters independently.

In plain HTML and CSS, there’s only one reliable way to do that: wrap each character in its own span element. I could do that manually, but that would be fragile, hard to maintain, and would quickly fall apart when copy changes. Instead, when I need per-letter control, I use a text-splitting library like splt.js (although other solutions are available). This takes a text node and automatically wraps words or characters, giving me extra hooks to animate and style without messing up my markup.

It’s an approach that keeps my HTML readable and semantic, while giving me the fine-grained control I need to recreate the uneven, characterful typography you see in classic cartoon title cards. However, this approach comes with accessibility caveats, as most screen readers read text nodes in order. So this:

<h2>Hum Sweet Hum</h2>

…reads as you’d expect:

Hum Sweet Hum

But this:

<h2>
<span>H</span>
<span>u</span>
<span>m</span>
<!-- etc. -->
</h2>

…can be interpreted differently depending on the browser and screen reader. Some will concatenate the letters and read the words correctly. Others may pause between letters, which in a worst-case scenario might sound like:

“H…” “U…” “M…”

Sadly, some splitting solutions don't always deliver an accessible result, so I've written my own text splitter, splinter.js, which is currently in beta.
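To make the idea concrete, here's a minimal sketch of an accessible splitter. The function name and return shape are illustrative, not splinter.js's actual API: it produces per-character spans hidden from assistive technology, while keeping the full text available for an aria-label on the parent element.

```javascript
// Illustrative sketch (not splinter.js's actual API): split a string into
// per-character spans hidden from assistive technology, while keeping the
// full text available via an aria-label on the parent.
function splitToonText(text) {
  const html = Array.from(text)
    .map((char) =>
      char === " "
        ? " " // keep spaces as plain text so words can still wrap
        : `<span class="toon-char" aria-hidden="true">${char}</span>`
    )
    .join("");
  return { ariaLabel: text, html };
}

// Usage:
//   element.setAttribute("aria-label", result.ariaLabel);
//   element.innerHTML = result.html;
```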

Transforming Individual Letters

To activate my Toon Text Splitter, I add a data- attribute to the element I want to split:

<h2 data-split="toon">Hum Sweet Hum</h2>

First, my script separates each word into individual letters and wraps them in a span element with class and ARIA attributes applied:

<span class="toon-char" aria-hidden="true">H</span>
<span class="toon-char" aria-hidden="true">u</span>
<span class="toon-char" aria-hidden="true">m</span>

The script then takes the initial content of the split element and adds it as an aria-label attribute to help maintain accessibility:

<h2 data-split="toon" aria-label="Hum Sweet Hum">
  <span class="toon-char" aria-hidden="true">H</span>
  <span class="toon-char" aria-hidden="true">u</span>
  <span class="toon-char" aria-hidden="true">m</span>
</h2>

With those class attributes applied, I can then style individual characters as I choose.

For example, for “Hum Sweet Hum,” I want to replicate how its letters shift away from the baseline. After using my Toon Text Splitter, I applied four different translate values using several :nth-child selectors to create a semi-random look:

/* 4th, 8th, 12th... */
.toon-char:nth-child(4n) { translate: 0 -8px; }
/* 1st, 5th, 9th... */
.toon-char:nth-child(4n+1) { translate: 0 -4px; }
/* 2nd, 6th, 10th... */
.toon-char:nth-child(4n+2) { translate: 0 4px; }
/* 3rd, 7th, 11th... */
.toon-char:nth-child(4n+3) { translate: 0 8px; }

But translate is only one property I can use to transform my toon text.

I could also rotate those individual characters for an even more chaotic look:

/* 4th, 8th, 12th... */
.toon-char:nth-child(4n) { rotate: -4deg; }
/* 1st, 5th, 9th... */
.toon-char:nth-child(4n+1) { rotate: -8deg; }
/* 2nd, 6th, 10th... */
.toon-char:nth-child(4n+2) { rotate: 4deg; }
/* 3rd, 7th, 11th... */
.toon-char:nth-child(4n+3) { rotate: 8deg; }


And, of course, I could add animations to jiggle those characters and bring my toon text style titles to life. First, I created a keyframe animation that rotates the characters:

@keyframes jiggle {
  0%, 100% { transform: rotate(var(--base-rotate, 0deg)); }
  25% { transform: rotate(calc(var(--base-rotate, 0deg) + 3deg)); }
  50% { transform: rotate(calc(var(--base-rotate, 0deg) - 2deg)); }
  75% { transform: rotate(calc(var(--base-rotate, 0deg) + 1deg)); }
}

Before applying it to the span elements created by my Toon Text Splitter:

.toon-char {
  animation: jiggle 3s infinite ease-in-out;
  transform-origin: center bottom;
}

And finally, setting the rotation amount and a delay before each character begins to jiggle:

.toon-char:nth-child(4n) { --base-rotate: -2deg; }
.toon-char:nth-child(4n+1) { --base-rotate: -4deg; }
.toon-char:nth-child(4n+2) { --base-rotate: 2deg; }
.toon-char:nth-child(4n+3) { --base-rotate: 4deg; }

.toon-char:nth-child(4n) { animation-delay: 0.1s; }
.toon-char:nth-child(4n+1) { animation-delay: 0.3s; }
.toon-char:nth-child(4n+2) { animation-delay: 0.5s; }
.toon-char:nth-child(4n+3) { animation-delay: 0.7s; }

One Frame To Make An Impression

Cartoon title artists had one frame to make an impression, and their typography was as important as the artwork they painted. The same is true on the web.

A well-designed header or hero area needs clarity, character, and confidence — not simply a faded full-width background image.

With a few carefully chosen CSS properties — shadows, strokes, clipped backgrounds, and some restrained animation — we can recreate that same impact. I love toon text not because I’m nostalgic, but because its design is intentional. Make deliberate choices, and let a little toon text typography add punch to your designs.

I Built a Privacy-First CSV Explorer That Works Entirely in Your Browser

Why I Built This

CSV files are everywhere — exports from databases, APIs, analytics tools, surveys, and reports.

But working with them is still painful:

  • Excel crashes on large files
  • Online tools require uploads (privacy risk)
  • Many tools feel heavy for quick exploration

I wanted a tool that:

  • Works instantly
  • Runs fully in the browser
  • Never uploads data
  • Feels simple but powerful

So I built CSV Explorer — a free, web-based CSV analysis tool.

👉 Live app:
https://innoadeq.github.io/csv-explorer/

What CSV Explorer Does

CSV Explorer lets you open, explore, filter, and visualize CSV files directly in your browser.

No signup.
No backend.
No data uploads.

Everything runs locally.

Core Features
📂 Smart CSV Upload & Parsing

  • Drag-and-drop CSV loading
  • Automatic header detection
  • Smart data type inference (string, number, date, boolean)
  • Handles large CSV files efficiently
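Type inference like this is usually done by testing every non-empty value in a column against progressively looser predicates. Here's a rough sketch of the idea; the function name, predicates, and fallbacks are assumptions, not CSV Explorer's actual implementation:

```javascript
// Hypothetical sketch of per-column type inference, similar in spirit to
// what CSV Explorer describes. All names and rules here are assumptions.
function inferType(values) {
  const nonEmpty = values.filter((v) => v !== "" && v != null);
  if (nonEmpty.length === 0) return "string"; // nothing to infer from

  const isBoolean = (v) => /^(true|false)$/i.test(v);
  // Guard against Number("") === 0 by rejecting blank strings first.
  const isNumber = (v) => v.trim() !== "" && !Number.isNaN(Number(v));
  // Treat numeric strings as numbers, not dates.
  const isDate = (v) => !isNumber(v) && !Number.isNaN(Date.parse(v));

  if (nonEmpty.every(isBoolean)) return "boolean";
  if (nonEmpty.every(isNumber)) return "number";
  if (nonEmpty.every(isDate)) return "date";
  return "string"; // the safe fallback for mixed columns
}
```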

🧠 Intelligent Column Management

  • Automatically detects all columns
  • Search columns instantly
  • Select/deselect columns to focus your analysis
  • Collapsible column selector for mobile users
  • Avoids overwhelming users with too many fields

🔍 Advanced Data Filtering

Filters adapt based on column type:

  • Text: partial matching
  • Numbers: ranges, comparisons, exact values
  • Dates: from/to date selection
  • Booleans: dropdown filter

All filtering happens in real time.

📄 Smart Pagination

  • Configurable rows per page (10, 25, 50, 100)
  • First / Previous / Next / Last navigation
  • Clear record count and page info
  • Mobile-friendly controls

📊 Data Visualization

  • Bar charts: sum, average, count, min, max
  • Pie charts: distribution and proportions
  • Flexible column selection
  • Built using Chart.js

Perfect for quick insights without exporting again.

🔒 Privacy & Security (Core Principle)

  • 100% client-side processing
  • Files never leave your browser
  • No tracking, no analytics, no storage
  • Works offline once loaded

This makes it safe for sensitive or internal data.

🎯 Who Is This For?

  • Data analysts exploring datasets
  • Business users analyzing reports
  • Researchers working with surveys
  • Students learning data analysis
  • Developers inspecting CSV exports
  • Anyone tired of Excel limitations 😄

🚀 Try It Out

🔗 CSV Explorer:
https://innoadeq.github.io/csv-explorer/

I’d love feedback, feature requests, or suggestions from the community.