Getting Started With The Popover API

Tooltips feel like the smallest UI problem you can have. They’re tiny and usually hidden. When someone asks how to build one, the traditional answer almost always comes back using some JavaScript library. And for a long time, that was the sensible advice.

I followed it, too.

On the surface, a tooltip is simple. Hover or focus on an element, show a little box with some text, then hide it when the user moves away. But once you ship one to real users, the edges start to show. Keyboard users Tab into the trigger, but never see the tooltip. Screen readers announce it twice, or not at all. The tooltip flickers when you move the mouse too quickly. It overlaps content on smaller screens. Pressing Esc does not close it. Focus gets lost.

Over time, my tooltip code grew into something I didn’t really want to own anymore. Event listeners piled up. Hover and focus had to be handled separately. Outside clicks needed special cases. ARIA attributes had to be kept in sync by hand. Every small fix added another layer of logic.

Libraries helped, but they were also black boxes: I worked around them instead of fully understanding what was happening behind the scenes.

That was what pushed me to look at the newer Popover API. I wanted to see what would happen if I rebuilt a single tooltip using the browser’s native model without the aid of a library.

Before we start, it’s worth noting that, as with any new feature, some details are still being ironed out. The core API enjoys great browser support, but several pieces of it remain in flux, so it’s worth keeping an eye on Caniuse in the meantime.

The “Old” Tooltip

Before the Popover API, using a tooltip library was not a shortcut. It was the default. Browsers didn’t have a native concept of a tooltip that worked across mouse, keyboard, and assistive technology. If you cared about correctness, your only option was to use a library, and that is exactly what I did.

At a high level, the pattern was always the same: a trigger element, a hidden tooltip element, and JavaScript to coordinate the two.

<button class="info">?</button>
<div class="tooltip" role="tooltip">Helpful text</div>

The library handled the wiring: showing the tooltip on hover or focus, hiding it on blur or mouse leave, and repositioning it on scroll or resize.
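For a sense of what that wiring looked like by hand, here is a rough sketch. The names and class hooks are illustrative, not from any particular library:

```javascript
// A sketch of the glue code the old approach needed. Every behavior is a
// separate listener you now own.
function wireLegacyTooltip(trigger, tooltip) {
  const show = () => tooltip.classList.add("visible");
  const hide = () => tooltip.classList.remove("visible");

  // Hover and focus have to be wired separately...
  trigger.addEventListener("mouseenter", show);
  trigger.addEventListener("mouseleave", hide);
  trigger.addEventListener("focus", show);
  trigger.addEventListener("blur", hide);

  // ...and Esc is yet another listener. Here it only fires while the
  // trigger has focus -- exactly the kind of gap that bred edge cases.
  trigger.addEventListener("keydown", (event) => {
    if (event.key === "Escape") hide();
  });
}
```

And this is before positioning, outside clicks, and ARIA synchronization enter the picture.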

Over time, the tooltip could become fragile. Small changes carried risk. Minor fixes caused regressions. Worse, adding new tooltips inherited the same complexity. Things technically worked, but never felt settled or complete.

That was the state of things when I decided to rebuild the tooltip using the browser’s native Popover API.

The Moment I Tried The Popover API

I didn’t switch to using the Popover API because I wanted to experiment with something new. I switched because I was tired of maintaining tooltip behavior that I believed the browser should have already understood.

I was skeptical at first. Most new web APIs promise simplicity, but still require glue, edge-case handling, or fallback logic that quietly recreates the same complexity that you were trying to escape.

So, I tried the Popover API in the smallest way possible. Here’s what that looked like:

<!-- popovertarget creates the connection to id="tip-1" -->
<button popovertarget="tip-1">?</button>

<!-- popover="manual": browser manages this as a popover -->
<!-- role="tooltip": tells assistive technology what this is -->
<div id="tip-1" popover="manual" role="tooltip">
  This button triggers a helpful tip.
</div>
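One note on this markup: `popover="manual"` leaves showing and hiding to you, so a hover/focus tooltip still needs a little script. The difference is that the mechanics are now browser-owned. A minimal sketch, using the standard `togglePopover()` method:

```javascript
// togglePopover(force) is idempotent, so overlapping hover and focus
// events can't double-show the tip the way showPopover() would throw.
function wirePopoverTooltip(trigger, tip) {
  const show = () => tip.togglePopover(true);
  const hide = () => tip.togglePopover(false);

  for (const name of ["mouseenter", "focus"]) trigger.addEventListener(name, show);
  for (const name of ["mouseleave", "blur"]) trigger.addEventListener(name, hide);
}
```

That handful of lines is the entire custom surface area; everything else in this article comes from the platform.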

1. The Keyboard “Just Works”

Keyboard support depended on multiple layers lining up correctly: focus had to trigger the tooltip, blur had to hide it, Esc had to be wired manually, and timing mattered. If you missed one edge case, the tooltip would either stay open too long or disappear before it could be read.

With the popover attribute set to auto, the browser takes over the basics: Tab and Shift+Tab behave normally, Esc closes the tooltip every time, and no extra listeners are required. (A manual popover opts out of this light-dismiss behavior, so Esc and outside clicks are left to you.)

<div popover="manual">
  Helpful explanation
</div>

What disappeared from my codebase were global keydown handlers, Esc-specific cleanup logic, and state checks during keyboard navigation. The keyboard experience stopped being something I had to maintain, and it became a browser guarantee.

2. Screen Reader Predictability

This was the biggest improvement. Even with careful ARIA work, the old behavior varied, as I outlined earlier, and every small change felt risky. A popover with a proper role behaves in a far more stable and predictable way:

<div popover="manual" role="tooltip">
  Helpful explanation
</div>

And here’s another win: After the switch, Lighthouse stopped flagging incorrect ARIA state warnings for the interaction, largely because there are no longer custom ARIA states for me to accidentally get wrong.

3. Focus Management

Focus used to be fragile. Before, I had rules like: show the tooltip when the trigger gains focus, keep it open while focus moves into the tooltip, hide it when both the trigger and tooltip lose focus, and restore focus manually on close. This worked until it didn’t.

With the Popover API, the browser enforces a simpler model where focus can more naturally move into the popover. Closing the popover returns focus to the trigger, and there are no invisible focus traps or lost focus moments. And I didn’t add focus restoration code; I removed it.

Conclusion

The Popover API means that tooltips are no longer something you simulate. They’re something the browser understands. Opening, closing, keyboard behavior, Escape handling, and a big chunk of accessibility now come from the platform itself, not from ad-hoc JavaScript.

That does not mean tooltip libraries are obsolete. They still make sense for complex design systems, heavy customization, or legacy constraints, but the default has shifted: for the first time, the simplest tooltip can also be the most correct one. If you are curious, try this experiment: replace just one tooltip in your product with the Popover API. Don’t rewrite everything or migrate a whole system. Pick one and see what disappears from your code.

When the platform gives you a better primitive, the win is not just fewer lines of JavaScript, but fewer things you have to worry about at all.

Check out the full source code in my GitHub repo.

Further Reading

For deeper dives into popovers and related APIs:

  • “Poppin’ In”, Geoff Graham
  • “Clarifying the Relationship Between Popovers and Dialogs”, Zell Liew
  • “What is popover=hint?”, Una Kravets
  • “Invoker Commands”, Daniel Schwarz
  • “Creating an Auto-Closing Notification with an HTML Popover”, Preethi
  • Open UI Popover API Explainer
  • “Pop(over) the Balloons”, John Rhea
  • “CSS Anchor Positioning”, Juan Diego Rodríguez

MDN also offers comprehensive technical documentation for the Popover API.

How I Built an AI RAG App MVP with Lovable (Step-by-Step)

Building AI products doesn’t have to take months.

In this article, I’ll walk you through how I built an AI-powered Tax Assistant MVP using Lovable — including the architecture decisions, knowledge base setup, and guardrails that made it production-ready.

This isn’t just about tools. It’s about product thinking.

Watch the full build here:
https://youtu.be/RYlnbu2jTjI?si=OXqbXqPk4p11SoXh

The Problem

Tax laws are complex, dense, and often buried inside long PDF documents. Small business owners and freelancers struggle to:

  • Find accurate information quickly
  • Understand compliance requirements
  • Interpret legal language

A simple chatbot isn’t enough. It needs context-aware answers grounded in official sources.

That’s where RAG comes in.

What Is a RAG App?

RAG (Retrieval-Augmented Generation) combines:

  1. A knowledge base (structured documents)
  2. A retrieval system (searches relevant sections)
  3. An LLM (generates contextual answers)

Instead of generating answers from memory, the AI retrieves relevant documents first, then responds.

This reduces hallucination and improves accuracy.
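The loop is small enough to sketch. Everything below is illustrative, not Lovable’s internals: a toy keyword-overlap scorer stands in for a real embedding search, and the final string is what would be sent to the LLM:

```javascript
// Toy retrieve-then-generate loop. A real app would embed chunks and use a
// vector store; keyword overlap stands in for similarity search here.
function retrieve(chunks, question, topK = 2) {
  const terms = question.toLowerCase().split(/\W+/).filter(Boolean);
  return chunks
    .map((chunk) => ({
      chunk,
      score: terms.filter((t) => chunk.toLowerCase().includes(t)).length,
    }))
    .filter(({ score }) => score > 0)       // drop chunks with no overlap
    .sort((a, b) => b.score - a.score)      // best matches first
    .slice(0, topK)
    .map(({ chunk }) => chunk);
}

function buildPrompt(contextChunks, question) {
  // Grounding: the model is instructed to answer only from retrieved context.
  return [
    "Answer using only the context below.",
    ...contextChunks.map((c, i) => `[${i + 1}] ${c}`),
    `Question: ${question}`,
  ].join("\n");
}
```

Swap in embeddings and a vector database and the shape stays the same: retrieve, then generate.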

Defining the MVP Scope

Before touching Lovable, I defined the boundaries.

Included in V1:

  • Ask tax-related questions
  • AI grounded in a structured knowledge base
  • Consultant listing
  • Clear disclaimer (educational use only)
  • Clean minimal UI
  • User accounts

Not Included:

  • Payment processing
  • Advanced compliance workflows
  • Legal advisory functionality

A tight scope keeps the MVP focused and realistic.

Step 1: Designing the Core Workflow

The architecture is simple:

User → AI → Knowledge Base → Contextual Response

The key decision here was to avoid raw prompting.
Everything needed to be grounded in documented tax regulations.

This is what separates a real AI product from a basic chatbot wrapper.

Step 2: Preparing the Knowledge Base

I sourced official tax documents and structured them into logical sections.

Why structure matters:

  • Smaller chunks improve retrieval precision
  • Categorization improves answer relevance
  • Clean formatting reduces hallucination

Most AI apps fail here. Garbage input equals weak output.
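For reference, the simplest version of that chunking step is a sliding window. This is the idea, not the exact pipeline I used, and the sizes are placeholders:

```javascript
// Fixed-size chunks with overlap. The overlap means a sentence split at a
// boundary still appears whole in at least one chunk, so retrieval doesn't
// miss facts that straddle chunk edges.
function chunkText(text, chunkSize = 500, overlap = 100) {
  const chunks = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Category labels and section headings can then be attached to each chunk as metadata to improve answer relevance.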

Step 3: Setting Up AI Guardrails in Lovable

This was critical.

I defined:

  • System instructions (educational tone, clarity)
  • Boundaries (no legal guarantees)
  • Refusal behavior when uncertain
  • Encouragement to consult professionals for complex cases

Guardrails are not optional when building AI tools in regulated domains.

Step 4: Adding Human Consultants

AI handles general questions.

Humans handle edge cases.

This hybrid model makes the product more credible and scalable long-term.

It also opens monetization pathways later.

Step 5: Disclaimers and Risk Positioning

The app clearly states it is for educational purposes only.

When building AI tools that deal with finance, health, or law, clarity protects both users and builders.

Never skip this step.

Lessons Learned

Here’s what I’d improve in V2:

  • Better document chunking strategy
  • Conversation history
  • Paid consultation booking
  • Industry-specific tax flows
  • Analytics dashboard

Building V1 isn’t about perfection. It’s about validation.

Final Thoughts

AI tools are becoming easier to build.

But strong AI products still require:

  • Clear problem definition
  • Focused MVP scope
  • Thoughtful knowledge base design
  • Guardrails
  • Long-term product thinking

If you’re building AI SaaS or experimenting with RAG apps, I hope this breakdown helps.

Full walkthrough here:

Why Our Next.js 15 App Lost 80% of Its Traffic Overnight (And How We Fixed It) 📉

📉 My Traffic Dropped to 0 Overnight: The Next.js 15 Hydration Trap

Imagine waking up, checking your Google Analytics 4 (GA4) dashboard for your shiny new SaaS product, and seeing a horrifying number: 0 Users. 0 Views. 100% Drop.

Did the servers crash? Did Google de-index my domain?

Neither. The site was running perfectly fine. The culprit? A sneaky Hydration Mismatch in Next.js 15 that silently murdered my tracking script.

Here is how a seemingly innocent <GoogleAnalytics /> component placement caused a complete tracking blackout on sandagent.dev, and how you can avoid this exact trap.

🕵️ The Crime Scene

Like any good Next.js developer, I wanted to add Google Analytics to my app/layout.tsx. Standard procedure, right? I used a third-party GA package (or standard next/third-parties/google) and placed it right where it belongs—in the <head> tag.

// ❌ The Deadly Mistake
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <head>
        {/* Looks perfectly normal, doesn't it? */}
        <GoogleAnalytics gaId="G-XXXXXXXXXX" />
      </head>
      <body>
        {children}
      </body>
    </html>
  );
}

🔍 The Investigation: Why it broke

In Next.js 15 (with React 19), the hydration process has become incredibly strict.

When you place dynamic script components inside the <head>, the server renders the HTML with the injected <script> tags. However, during the client-side hydration phase, third-party browser extensions, or even React’s own strict <head> reconciliation, can cause a mismatch.

Instead of just throwing a red warning in your console and moving on, the hydration failure caused React to effectively drop or bypass the execution of the GA tracking scripts in the client-side DOM tree.
The result? The page visually loads perfectly, the user clicks around, but the collect?v=2 network request is never sent to Google. Complete data blackout.

🛠️ The Fix (The 1-Line Solution)

After digging through the Next.js docs and debugging the React tree, the fix was almost embarrassingly simple.

Do not put the GA component in the <head>. Put it inside the <body>.

// ✅ The Correct Way
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <head>
        {/* Leave standard meta tags here */}
      </head>
      <body>
        {children}

        {/* Place it here instead! */}
        <GoogleAnalytics gaId="G-XXXXXXXXXX" />
      </body>
    </html>
  );
}

Why does this work?

By placing the script tag inside the <body> (or at the very end of it), it avoids conflicting with React’s strict <head> management during the initial render pass. The script still loads asynchronously, performance isn’t impacted, but most importantly: React hydration no longer swallows your tracking code.

💡 Takeaways for Next.js 15 Developers

  1. Don’t trust the visual load: Just because your site didn’t 500 error doesn’t mean your background scripts are running. Check your Network tab for collect requests after a major Next.js version bump.
  2. Move scripts to <body>: Unless strictly required by the provider to be the first thing in the <head>, placing analytics components inside the body tag is much safer against React 19 hydration mismatches.
  3. Set up traffic anomaly alerts: If I hadn’t had an automated cron job fetching daily GA reports, I might have gone weeks without realizing my traffic was zeroed out.

Have you run into weird React 19 / Next.js 15 hydration bugs yet? Let me know in the comments!

(P.S. If you’re building AI agents, you can check out the project that almost lost all its metrics at sandagent.dev 🚀)


Mobile App Onboarding Explained: The Key to Activation and Retention

Most users don’t uninstall your app because something breaks. They uninstall because something doesn’t make sense.

They install the app with curiosity. The screenshots looked promising. The problem it solves feels relevant. There is intent. But when they open it for the first time, that intent meets uncertainty.

The screen is unfamiliar, the next step is not obvious, and the value is not immediate.

So they hesitate.

That hesitation is not dramatic. There is no error message, no visible failure. But something important happens in that moment. The user begins to question whether the app is worth the effort required to understand it.

This is where mobile app onboarding quietly decides the outcome.

Onboarding is often misunderstood as a set of introduction screens or a signup flow. But its real purpose is much deeper. It exists to help users move from curiosity to confidence.

When users open an app, they are not trying to learn everything. They are trying to answer a much simpler question: What should I do first, and will it be worth it?

If the product answers that question quickly, users move forward. If it doesn’t, users slow down. And when users slow down, doubt begins to grow.

This is why the first meaningful action matters so much.

In every successful product, there is a moment when the value becomes real. Sending the first message in a chat app. Creating the first task in a productivity tool. Completing the first transaction in a fintech app. This is the moment when the product stops being an interface and starts becoming useful.

This moment is known as activation.

Before activation, users are evaluating. After activation, users are engaging.

Good onboarding exists to guide users toward that moment as quickly and clearly as possible. It removes ambiguity. It provides direction. It makes the path forward visible.

Poor onboarding does the opposite. It asks for effort before delivering value. It presents empty screens without guidance. It forces users to figure things out on their own.

Every extra second of confusion increases the likelihood that the user will leave.

The most effective apps understand this deeply. They do not try to explain everything upfront. Instead, they focus on helping users experience progress early. They make the first success easy to achieve.

Because once users experience value, their mindset changes. The app no longer feels like something to evaluate. It becomes something to use.

Retention does not begin after days or weeks. It begins in the first few minutes.

That first experience shapes trust. It shapes confidence. It shapes whether the product becomes part of the user’s routine or disappears before it ever had the chance.

Onboarding is not just the beginning of the product journey.

It is the moment that decides whether the journey continues.

👇 Read the full breakdown: “Mobile App Onboarding: The First 5 Minutes That Decide Retention”

Learn n8n Automation in 1 Hour – Complete Beginner Guide

Automation is one of those skills that instantly upgrades how you work.

Instead of copying data between tools, sending repetitive Slack messages, or manually updating spreadsheets, you build a workflow once and let it run.

In this guide, I’ll show you how to get started with n8n and build your first real automation step by step.

If you’re a beginner, this is the right place to start.

What is n8n?

n8n is an open-source workflow automation tool that allows you to connect apps, APIs, and services.

At its core, every workflow follows a simple pattern:

Trigger → Process → Action

You define:

  • What starts the workflow
  • How the data should be transformed
  • What should happen next

And n8n executes it automatically.
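The Trigger → Process → Action shape maps cleanly onto plain functions. In n8n each step is a node on the canvas; the data and step names below are made up for illustration:

```javascript
// A conceptual sketch of an n8n workflow as three composed steps.
const workflow = {
  // Trigger: e.g. a form submission delivering raw items
  trigger: () => [{ name: " Ada ", email: "ADA@EXAMPLE.COM" }],
  // Process: the Edit Fields idea -- rename, format, clean up
  process: (items) =>
    items.map((item) => ({
      name: item.name.trim(),
      email: item.email.toLowerCase(),
    })),
  // Action: e.g. a Slack notification per item
  action: (items) => items.map((item) => `Notify Slack: ${item.name} <${item.email}>`),
};

const run = (wf) => wf.action(wf.process(wf.trigger()));
```

Every workflow in the tutorial, however complex, reduces to some composition of this shape.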

What You’ll Learn in This Tutorial

In the full video, we build a practical workflow while covering the essential nodes you actually need to understand.

1️⃣ Triggers – How Workflows Start

Every automation begins with a trigger.

We cover:

  • Form submission trigger
  • Schedule trigger (run workflows automatically at specific times)

Once you understand triggers, you understand how automation begins.

2️⃣ Editing & Transforming Data

Incoming data is rarely clean or structured the way you want.

Using the Edit Fields node, you’ll learn how to:

  • Rename fields
  • Format values
  • Clean up incoming data

This is where automation becomes structured and reliable.

3️⃣ Conditional Logic (If/Else Workflows)

Real-world automation requires decision-making.

With conditional nodes, you can:

  • Run actions only if certain conditions are met
  • Create multiple branches inside a single workflow

This transforms simple workflows into intelligent systems.

4️⃣ Looping Through Items

When your workflow receives multiple items, you need to process each one correctly.

You’ll learn how to:

  • Loop through data
  • Handle multiple records properly

This is especially useful when working with lists or API responses.

5️⃣ Connecting External Tools

Automation becomes powerful when it interacts with real tools.

In this tutorial, we integrate:

  • Slack for notifications
  • Google Sheets for storing structured data

This is where workflows start producing real outcomes.

6️⃣ Error Handling

Things break.

APIs fail. Credentials expire. Data formats change.

Instead of letting workflows fail silently, you’ll learn how to:

  • Catch errors
  • Handle failures properly
  • Build more reliable automations

This is what separates experiments from production-ready systems.

7️⃣ HTTP Request Node (Unlocking APIs)

The HTTP Request node allows you to:

  • Connect to any API
  • Send or retrieve data
  • Build custom integrations

Once you understand this node, you’re no longer limited to built-in integrations.

Who Is This For?

This tutorial is perfect for:

  • Developers exploring automation
  • Founders building internal tools
  • Freelancers automating client workflows
  • Anyone tired of repetitive manual tasks

No advanced knowledge required.

🎥 Watch the Full Tutorial

If you want to follow along and build the workflow step by step:

Automation gives you leverage.

Start small. Build simple workflows. Add logic. Connect APIs.

That’s how you move from manual work to scalable systems.

YouTrack 2026 Roadmap

Now that we’re in 2026, we want to take a moment to reflect on the past year and share what’s ahead. YouTrack continues to grow – we’re seeing more teams than ever making the switch, and we’re deeply grateful for the trust you place in us. Together with our consulting partners, we kicked off a number of migration projects with Fortune 500 customers last year, and even more are lined up for 2026. Your feedback has been the driving force behind the decisions we’ve made, and it continues to shape our roadmap.

This year, our focus is on making YouTrack even more powerful for large, scaling teams – here’s what we’re building.

Our commitment to YouTrack Server and Cloud

Giving you the flexibility to choose the hosting option that best fits your organization and data governance needs has always been our priority – and we intend to keep it that way for years to come. While Atlassian has been pushing Jira and Confluence customers toward cloud-only hosting and ending server support altogether, we’re going in the opposite direction. Technically, we maintain one codebase for both – every feature we build is available to Server and Cloud alike.

Continuous YouTrack Server support

We feel a deep sense of responsibility toward customers who choose YouTrack Server, especially those embarking on large-scale, long-term migration projects. Server hosting isn’t a legacy choice – it’s a valid long-term strategy, and it deserves our full support. Whether you are a growing team or a large enterprise, YouTrack Server will remain available to teams of any size. Behind the scenes, we’re also investing in our database architecture to ensure both hosting options continue to receive equal support and performance at scale.

New European data center for YouTrack Cloud

For YouTrack Cloud customers, we’re providing more flexibility and compliance with European data residency requirements. We are adding Frankfurt, Germany, as a new European data center location hosted by Amazon Web Services (AWS). Starting this February, new YouTrack Cloud instances can be configured with Germany as the selected location. For existing instances, changing the location where your data is stored requires the YouTrack Support team to migrate your instance from one data center to another. If you want to move your instance to a different data center, submit a request to YouTrack Support.

What’s ahead in 2026

Our 2026 roadmap is shaped by the people who use YouTrack every day. For teams migrating from tools like Jira and Confluence, we’re investing in smoother imports, expanding opportunities for app developers, and building the infrastructure to support large-scale transitions. We’re also making the Knowledge Base more collaborative for migrating teams bringing their documentation with them, and for everyone already using YouTrack. 

For project managers, this is a big year – we’re introducing Whiteboards as a major new way to plan visually, and we plan to enhance Gantt charts and Agile Boards. 

For B2B support teams, we’re expanding YouTrack Helpdesk with client organizations, enabling them to provide tailored experiences for different clients.

We’re also working to integrate AI assistance more deeply into every user’s daily workflow.

For growing and enterprise teams migrating to YouTrack

We work hands-on with large organizations through every step of their migration journey. What we learn from these projects directly shapes our roadmap – and we deliberately keep it flexible so we can quickly address challenges as they arise.

Enhancing the migration experience

We are working to make the migration from other project management tools to YouTrack easier and faster. We are continuously improving our existing import options, with a stronger focus on Jira and Confluence. You can also expect more built-in import options, including a new ClickUp migration wizard.

Creating new opportunities for app developers

Our consulting partners play a key role in supporting enterprise customers through migrations and project customization.

We’re investing in expanding opportunities for app developers – both for partners building solutions for their clients and for independent developers contributing to the JetBrains Marketplace app collection. You can now publish paid apps and build custom apps for AI-powered tools using YouTrack’s remote MCP Server. If you’re interested in working with a partner, contact us and we’ll connect you with the right team.

Improving YouTrack performance

We continue to work on migrating YouTrack to a new database that supports a multi-node environment, improving overall performance and ensuring the scalability and reliability required for large instances.

Collaborative editing in Knowledge Base

For teams migrating from Confluence, a powerful Knowledge Base is essential. We’re working on collaborative article editing – and you’ll find the full details in the “For everyone” section below.

For Project Managers

Introducing YouTrack Whiteboards

Planning a project often starts with ideas rather than tasks. With YouTrack Whiteboards, we’re introducing a new, flexible way for teams to collaborate from a bird’s-eye view – whether you’re starting from scratch or building on existing project content.

You’ll be able to brainstorm and visualize ideas on a shared whiteboard-style interface in real time. Once notes are ready to be turned into actions, they can be transformed into tasks and articles with preconfigured dependencies, fully integrated into your YouTrack project’s context.

Gantt charts and Agile Boards

We plan to enhance Gantt charts with an improved UI for easier timeline viewing and better performance when making changes. Agile Boards will gain the ability to embed task lists, making it faster to navigate to important tasks directly from the board.

For B2B support teams

Helpdesk with client organizations

We’re expanding YouTrack Helpdesk to better support B2B scenarios. Our next step is introducing client organizations, enabling support teams to provide tailored experiences for different clients. You’ll be able to manage tickets from multiple client organizations at once, while keeping requests clearly separated. All tickets related to a client organization will be visible to its members by default, giving each client full visibility into the status and progress of their requests.

For everyone

Collaborative editing in your Knowledge Base

Collaborative article editing has been one of the most requested features in YouTrack – and one that’s technically challenging to build. We’re looking forward to finally making it happen. You’ll be able to work on content together with your team members in real time, see who else is editing, and make changes simultaneously. We also plan to improve the experience of working with tables by introducing more convenient editing options.

AI assistance

These days, it’s hard to imagine our day-to-day workflows without AI. We’ll continue enhancing the free AI assistance available out of the box in YouTrack, with a focus on contextual capabilities seamlessly integrated into your workflow. Here’s a preview of what we plan to make available soon:

  • An AI-powered chat that supports you in all aspects of your work. Ask YouTrack via text or voice commands to update tasks, search through articles, manage planning on Whiteboards, and perform many other project-wide actions.
  • Creating tasks from Slack will become easier with semantic AI, letting you generate task drafts from conversation context with team members or clients.
  • A new My Work page will provide a personalized view of tasks and articles, organized into widgets to help every user stay focused on what matters most.

Agent-based automation

Delivering AI automation remains our focus. We’ll continue to make the data available in YouTrack ready for consumption and manipulation by AI agents. 

For teams that prefer to work on projects from their existing LLM, IDE, or agent platform, we’ll expand the range of predefined actions available via YouTrack’s remote MCP server. We’ll also build custom integrations for popular AI-powered tools. Starting this February, YouTrack will be available as part of n8n, empowering you to automate workflows and connect YouTrack with hundreds of apps and services.

Let’s shape the future of YouTrack together

We’d love to hear from you! Your feedback shapes YouTrack’s future, and we’re always open to ideas, suggestions, and insights. Whether you want to share a feature request, an improvement suggestion, or just your thoughts, get in touch with us by commenting on this blog or using our public project tracker.

Thank you for being a part of the YouTrack community. Together, we’re building a more powerful YouTrack for 2026 and beyond.

Your YouTrack team!

Rust Learning Log: Glob Imports, Library Crates, Multiple Binaries

Today I learned about the glob operator, creating a library crate, and multiple binary crates

The Glob Operator:

The glob operator (*) lets you import everything from a module at once.

At first, it felt powerful. Almost too powerful.

Up until now, Rust has been teaching me to be explicit, to name exactly what I’m bringing into scope. The glob operator relaxes that slightly.

That contrast made me think. Rust gives you convenience, but it also gives you responsibility.

Even as a learner, I can see how using a glob import might make code shorter but potentially less clear. It made me more aware of how imports shape readability.

Creating a Library Crate:

Learning how to create a library crate changed how I view Rust projects.
Before this, everything I built felt like “a program.” Now I see that Rust encourages you to build reusable logic, code meant to be consumed by other parts of the project or even other projects.

A library crate feels like saying: this code is meant to be depended on.

That shift in perspective, from writing code for execution to writing code for reuse, feels like a major step in maturity.

Multiple Binary Crates:

This one surprised me. A single project can have multiple binary crates, meaning multiple entry points.

That made me realize Rust doesn’t assume your project is a single purpose tool. You can structure a project to serve different commands, roles, or execution flows while sharing the same core logic.

It made everything I’ve learned about modules and visibility suddenly make more sense. Structure supports scale.

Some languages (Go, for example) can have multiple main packages in a project, but they need to live in separate directories. Rust’s approach, with multiple binaries in the same package, feels similar but more integrated.
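In Cargo terms, that layout looks like this (the project and file names are made up for illustration):

```
my_project/
├── Cargo.toml        # one package...
└── src/
    ├── lib.rs        # ...with a library crate for shared logic,
    ├── main.rs       # a default binary (cargo run),
    └── bin/
        └── other_tool.rs   # and extra binaries (cargo run --bin other_tool)
```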

At this point, I’m starting to see Rust in layers.

Ownership taught me about memory discipline. Enums taught me about modeling uncertainty. Modules taught me about structure. Visibility taught me about boundaries.

Crates and binaries are teaching me about architecture.

Rust doesn’t just teach me how to code. It teaches me how to design systems.

I’m still learning and still early; the picture will get clearer soon.

When you’re building projects, do you start thinking about reusability and architecture upfront, or do you refactor toward it once things get complex?


// Using the glob operator (imports everything)
use std::collections::*;

fn example_glob() {
    let mut map = HashMap::new();
    map.insert("key", "value");
}

// Library crate structure
// src/lib.rs
pub fn greet(name: &str) -> String {
    format!("Hello, {}", name)
}

pub fn farewell(name: &str) -> String {
    format!("Goodbye, {}", name)
}

// Binary crate that uses the library
// src/main.rs
use my_library::greet;

fn main() {
    println!("{}", greet("Rust"));
}

// Additional binary crate
// src/bin/other_tool.rs
fn main() {
    println!("This is a separate binary in the same project");
}

// Documentation comment with examples
/// Adds two numbers together.
///
/// # Examples
///
/// ```
/// let sum = my_library::add(2, 3);
/// assert_eq!(sum, 5);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}




#Rust #RustLang #LearningRust #Programming #SoftwareEngineering #SoftwareArchitecture #CodeOrganization #LibraryDesign #Documentation #Blockchain #Solidity #SystemsDesign #CleanCode 

Working With STM32 Arm TrustZone-Based Projects in CLion

The Arm®v8-M architecture introduced a security extension called TrustZone®*, which splits the firmware running on the MCU into two worlds: secure and non-secure. In this blog post, I want to discuss how to work effectively on STM32 projects using this technology. We’ll get you all set up to use the latest and greatest code analysis tools with conventional debugging in CLion.

We’ll use the following setup: 

  • CLion 2026.1 EAP (Early Access Program) on Windows.
  • STM32 NUCLEO-L552ZE-Q board. 

We also need to install:

  • STM32CubeProgrammer to configure the hardware.
  • STM32CubeCLT 1.20.0 with the bundled ST-Link _gdbserver and cross-compiler to build and debug the STM32 firmware.

You can find the project we’ll be using as a showcase on GitHub. The initial stub was generated by STM32CubeMX 6.16.0. If you want to follow along, you can use a slightly older version of CLion; the minimal required version is 2025.3.2.

Understanding the STM32 TrustZone-based project structure

The secure TrustZone mode is a privileged one and can serve requests from the unprivileged non-secure mode. Why would we want to use this? The reasoning is similar to why we have a user space in good old desktop computers – we don’t trust some code enough, and we don’t want it interfering with critical tasks. Mission-critical parts go into secure mode, while the processing of some user, wireless, remote, or internet data runs in non-secure mode. Thus, we isolate the important stuff from the exposed interface, such as Wi-Fi or Bluetooth.

Even if there is a vulnerability in the internet-connected code, the important core remains unaffected. For example, even if the device’s non-secure application code is compromised, the secure bootloader allows a new, fixed application to be reflashed remotely and the device to be recovered without physical access.

So, what changes in the project structure compared to the default STM32 CMake project?

Tip: In STM32CubeMX, peripherals can be assigned to secure and non-secure zones, as shown in the image above.

The code generated by STM32CubeMX is actually two independent subprojects wrapped in a superproject that can build both. The root CMake project provides only the plumbing needed to build the two subprojects. It also contains the shared code referenced by both subprojects, such as the hardware drivers and the HAL on top of them. In CMake terms, this is done with the ExternalProject_Add command. Here is how this superproject structure looks in CLion:

There is one important caveat, though: The subprojects are not configured until the project is actually built. This means that for CMake to report any information CLion uses for code insight, the superproject must be built first. CLion will then automatically gather the necessary information.
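For reference, the ExternalProject_Add wiring in the root CMakeLists.txt looks conceptually like this (a simplified sketch, not the exact generated code; target and directory names are illustrative):

```cmake
include(ExternalProject)

# Each world is configured and built as an independent CMake project.
ExternalProject_Add(Secure
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/Secure
    BINARY_DIR ${CMAKE_CURRENT_SOURCE_DIR}/Secure/build
    INSTALL_COMMAND ""
)

ExternalProject_Add(NonSecure
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/NonSecure
    BINARY_DIR ${CMAKE_CURRENT_SOURCE_DIR}/NonSecure/build
    INSTALL_COMMAND ""
    DEPENDS Secure   # the non-secure image links against the secure gateway stubs
)
```

Because each subproject is only configured when the corresponding external-project step runs, the compile commands the IDE needs do not exist until after a build.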

Configuring the project in CLion

To start, clone our example repository from GitHub and open the .ioc file as a project. The default editor that opens should greet you with the name of the project’s MCU and an option to open it with STM32CubeMX for reconfiguration.

If you don’t have access to the hardware we’re using, you can alternatively generate your own project with STM32CubeMX. Select the option to generate the project with TrustZone enabled, configure the peripherals you need, and generate for CMake (it does not matter which compiler you choose; we support both GCC and ST-ARM-CLANG).

As I’ve noted above, all we need to do now is build the project, and we’ll get code insight in the subprojects. 

If you’re new to CLion, refer to our more thorough documentation to learn how to open an STM32 project or create a new one, configure it, and build it.

Tip: CLion respects the FOLDER property of CMake targets, but this creates unnecessary structure in the project run configurations. You can go to the advanced settings and turn this feature off by disabling Group CMake run configurations by FOLDER property.

Setting up debugging

Unfortunately, the CMake targets corresponding to the secure and non-secure subprojects added to the superproject don’t include information about the compiled files. You will need to enter this manually in the run configuration. Edit both run configurations and select the corresponding binaries as the executables:

We are looking into ways to work around this limitation of CMake’s external projects. (CPP-48380).

Tip: The non-secure target depends on the secure one, so when you build the non-secure one, the secure one is built as well.

Consult the manual for the MCU you are using to learn how to enable and disable TrustZone on your particular hardware. Following the STM introductory tutorial, we used STM32CubeProgrammer to configure the following option bytes on our board: TZEN=1, SECWM2_PSTRT=0x1, and SECWM2_PEND=0x0.

If you’re using our example project or followed our instructions and opened the .ioc file as a project, you already have a debug server set up. If you started from scratch, an ST-LINK debug server should already be pre-selected and pre-configured in Settings | Debugger. It’s designed for a streamlined setup and intended to work in the majority of common cases.

However, today we are discussing a somewhat more complex case, so we need something more powerful, more… generic. If you’re following our example, you will find the Generic debug server already set up in the project – we’ll use that one. Note that you might need to enable Debug Servers in Settings | Advanced Settings | Debugger to see it.

If you started from scratch, convert the ST-LINK debug server to Generic (using the button next to the Name field). This is a much more powerful option that allows full customization. We’ll need to adjust a few things to flash two images instead of just one, which is the default. We are looking into automating this process (CPP-48379).

In the Debugger tab of the generic debug server, go to the Connection section, select the Script | Custom option, and add the following:

# Connect to GDB Server
$GDBTargetCommand$

# Flash NonSecure binary
exec-file NonSecure/build/stm32l5-trustzone_NS.elf
load

# Flash Secure binary and load its symbols
file Secure/build/stm32l5-trustzone_S.elf
load

# Load symbols from NonSecure binary
add-symbol-file NonSecure/build/stm32l5-trustzone_NS.elf

# Reset the MCU
monitor reset

You can find an extended version of this script with logging echoes in our example.

The script connects to your device, uploads both non-secure and secure firmware, loads the necessary debug information, and resets the device.

Tip: The $GDBTargetCommand$ expression is an IDE macro that expands to the connection script as though it were generated in automatic mode. A preview is shown when you have automatic mode selected – for example, target remote tcp:localhost:12345.

Since we’re now performing all these steps manually, let’s disable the automatically added ones so we don’t do the same work twice. In the Device Settings tab, set Upload executable to device to Never and deselect both options for Reset.

Now, if you’ve followed our hardware setup, you should be able to set up a couple of breakpoints and start debugging!

Be careful, though, and don’t get too far ahead of yourself. The number of hardware breakpoints is limited by design (6 on our STM32L5). This means you can run out of breakpoints faster than you might expect, especially when working with shared code. A breakpoint in shared code consumes two hardware breakpoints: the code is compiled into both images, flashed twice, and needs one hardware breakpoint per copy.

Disabling TrustZone

After following this article, you may need to disable TrustZone mode on your MCU. Here, we briefly summarize instructions from the manual mentioned above. Refer to your device’s manual for instructions on how to proceed in your case.

To disable TrustZone:

  1. Raise the readout protection level to 0.5 or 1 (by setting RDP to 0x55 or 0xDC, respectively), which disables the debugging of secure code. Note that setting the protection level to 2 would also disable debugging for non-secure code, which is why we didn’t do this earlier.
  2. Raise the BOOT0 pin to VDD voltage.
  3. Cycle the IPP jumper (disconnect and reconnect it). At this stage, two LEDs should illuminate. After that, you should be able to connect with the STM32CubeProgrammer in Hot-plug mode.
  4. Simultaneously set the readout level to 0 (RDP to AA) and disable TZEN. 

If necessary, you can now revert the memory security level for the second flash bank.

What’s next

Have any questions, or did something not work as described? Please leave us a comment, or visit us at Embedded World 2026, Hall 4, booth 146. We look forward to your feedback!

We plan on writing a similar walkthrough for working with dual-core MCUs and MCUs with bootflash or bootrom memory. The project structure in those cases is similar to external CMake projects, but the debugging experience differs.

As mentioned earlier, we’re actively working to improve support for STM32 projects. Any ideas on what would fit your workflow are very welcome, so please file an issue in YouTrack. 

* Arm and TrustZone are registered trademarks of Arm Limited (or its subsidiaries or affiliates) in the US and/or elsewhere.