Code Autopsy #1: How 30 Lines Turned System Monitoring Into A Conversation

Part of the PC_Workman build-in-public series. Code Autopsy drops every Wednesday.

The Problem: Numbers Without Answers

You open Task Manager.

“CPU: 87%”

Cool.

But WHY 87%?

Is that normal? Should you worry? What process caused it? When did it start?

Task Manager doesn’t answer. HWMonitor doesn’t answer. MSI Afterburner doesn’t answer.

They show you WHAT is happening. Never WHY.

That’s the gap PC_Workman fills.

PC Workman 1.6.8 - hck_GPT in action. Service Setup: quick access to disable unused services (Bluetooth, Print, Fax). Today Report: data collected per session, daily usage averages, and alerts for suspected temperature or voltage spikes.

The Solution: EventDetector

After 800 hours building PC_Workman (most of it on a laptop that peaks at 94°C), I realized: users don’t need more data. They need context.

So I built EventDetector.

30 lines of Python that turn monitoring into a conversation.

Here’s how it works.

Step 1: Track YOUR Baseline (Not Generic Averages)

Most tools compare against hardcoded thresholds:

  • “50% CPU is normal”
  • “60% RAM is high”
  • “80°C is warm”

Problem: Your normal isn’t my normal.

A gaming PC idling at 30% CPU? Normal.

A lightweight laptop idling at 30% CPU? Something’s wrong.

EventDetector tracks YOUR baseline from the last 10 minutes:

def _get_baseline(self, now):
    """Get recent baseline averages from minute_stats.
    Cached for 60 seconds to avoid excessive queries.
    """
    cutoff = now - SPIKE_BASELINE_WINDOW  # 10 minutes

    # conn: the app's SQLite connection (created elsewhere)
    row = conn.execute("""
        SELECT AVG(cpu_avg)  AS cpu_avg,
               AVG(ram_avg)  AS ram_avg,
               AVG(gpu_avg)  AS gpu_avg,
               AVG(cpu_temp) AS cpu_temp,
               AVG(gpu_temp) AS gpu_temp
        FROM minute_stats
        WHERE timestamp >= ?
    """, (cutoff,)).fetchone()

    # Cache the result so the query runs at most once per minute
    self._baseline_cache = {
        'cpu_avg': row[0], 'ram_avg': row[1], 'gpu_avg': row[2],
        'cpu_temp': row[3], 'gpu_temp': row[4],
    }
    self._baseline_cache_time = now
    return self._baseline_cache

Key insight: The baseline is YOU. Not everyone. Just you.

PC Workman 1.6.8 - Event detector for hck_GPT insights, based on long-term monitoring of CPU, GPU, and RAM. EventDetector code with highlights on baseline, delta, rate limiting, and severity.

Step 2: Calculate Delta (Current vs YOUR Normal)

Once we have YOUR baseline, detecting spikes is simple math:

def _check_metric(self, now, metric_name, current_val, 
                  baseline_val, threshold, description):
    """Check if a metric exceeds its threshold above baseline"""

    delta = current_val - baseline_val

    if delta < threshold:
        return  # No spike - you're within YOUR normal range

Example:

  • Your CPU baseline (last 10 min): 42%
  • Current CPU: 87%
  • Delta: +45%
  • Threshold: 20%

Result: Spike detected. But we’re not done yet.

Step 3: Rate Limiting (No Alert Spam)

Early versions of EventDetector had a problem: alert spam.

Chrome spikes CPU every 30 seconds? You’d get 120 alerts per hour.

Useless.

Solution: Rate limiting.

# Rate limiting: {metric_name: last_event_timestamp}
self._last_event_time = {}

def _check_metric(self, ...):
    # ... delta calculation ...

    # Rate limiting
    last_time = self._last_event_time.get(metric_name, 0)
    if now - last_time < SPIKE_COOLDOWN:  # 5 minutes
        return  # Too soon since last alert

    # Log the event
    self._last_event_time[metric_name] = now

Result: Max 1 alert per metric per 5 minutes. No spam.

Step 4: Severity Levels (Critical vs Warning vs Info)

Not all spikes are equal.

CPU spiking 21% above baseline? Worth noting.

CPU spiking 60% above baseline? Drop everything.

EventDetector categorizes:

# Determine severity
if delta >= threshold * 2:
    severity = 'critical'  # 🔴
elif delta >= threshold * 1.5:
    severity = 'warning'   # ⚠️
else:
    severity = 'info'      # ℹ️

Example thresholds:

  • CPU threshold: 20%
  • Delta 40%+: Critical
  • Delta 30%+: Warning
  • Delta 20-29%: Info

Result: Alerts match urgency.
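Putting Steps 2-4 together, here is a hedged sketch of the full check in one place. It is assembled from the excerpts above; SPIKE_COOLDOWN mirrors the 5-minute cooldown described earlier, and _log_event is a hypothetical helper standing in for whatever events.py actually calls when it writes an event:

SPIKE_COOLDOWN = 300  # seconds; the 5-minute cooldown described above

def _check_metric(self, now, metric_name, current_val,
                  baseline_val, threshold, description):
    """Delta check + rate limiting + severity, combined (sketch)."""
    if baseline_val is None:
        return  # No baseline yet (e.g. first minutes after startup)

    delta = current_val - baseline_val
    if delta < threshold:
        return  # Within YOUR normal range

    # Rate limiting: at most one alert per metric per cooldown window
    last_time = self._last_event_time.get(metric_name, 0)
    if now - last_time < SPIKE_COOLDOWN:
        return

    # Severity scales with how far past the threshold the delta is
    if delta >= threshold * 2:
        severity = 'critical'
    elif delta >= threshold * 1.5:
        severity = 'warning'
    else:
        severity = 'info'

    self._last_event_time[metric_name] = now
    # Hypothetical helper: persists the event to the events table
    self._log_event(now, metric_name, current_val, baseline_val,
                    severity, description)

Read top to bottom, the function answers three questions in order: is this outside your normal range, have we alerted about it recently, and how urgent is it?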

The Final Output: Context, Not Just Numbers

Here’s what you see in PC_Workman when a spike happens:

Before (Task Manager):

CPU: 87%

After (PC_Workman):

⚠️ CPU spike: 87% (baseline: 42%, delta: +45%)
Chrome.exe - started 3 hours ago

Same data. Different story.

One gives you anxiety. The other gives you action.

PC Workman 1.6.8 - My PC: Center of Actions.
Stats & Alerts - long-term monitoring of component and process usage, plus historical temperature and voltage alerts for spikes and suspect moments. Optimization & Services - tools to optimize and improve your PC's performance. First Setup & Drivers - everything needed to set up a new device or OS. Stability Tests - checks that PC Workman and its database are working correctly. Your Account Details - coming soon :)

Implementation Notes

Handles 5 Metrics With Same Logic

The beauty of this design: reusable.

Same _check_metric function handles:

  • CPU usage
  • RAM usage
  • GPU usage
  • CPU temperature
  • GPU temperature

def check_and_log_spike(self, cpu_avg, ram_avg, gpu_avg,
                        cpu_temp=None, gpu_temp=None):
    now = time.time()  # requires "import time" at module level
    baseline = self._get_baseline(now)

    # Check each metric with the same logic
    self._check_metric(now, 'cpu', cpu_avg,
                       baseline['cpu_avg'],
                       SPIKE_THRESHOLD_CPU, 'CPU usage')

    self._check_metric(now, 'ram', ram_avg,
                       baseline['ram_avg'],
                       SPIKE_THRESHOLD_RAM, 'RAM usage')

    # ... and so on

Clean. Maintainable. Scalable.

Performance: Cached Baselines

Baseline queries hit SQLite. Could be slow.

Solution: 60-second cache.

if now - self._baseline_cache_time < 60 and self._baseline_cache:
    return self._baseline_cache  # Use cached data

Result: Query once per minute, not once per second.

Storage: SQLite Events Table

All events logged to database:

INSERT INTO events
(timestamp, event_type, severity, metric, value, 
 baseline, process_name, description)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)

Benefits:

  • Historical tracking (what spiked last week?)
  • Pattern detection (Chrome spikes every Tuesday?)
  • Exportable data
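For illustration, writing one of these rows with Python's built-in sqlite3 module could look like the sketch below; the column list follows the INSERT above, while the connection and table setup are assumed rather than taken from the repo:

import sqlite3
import time

def log_event(conn, event_type, severity, metric, value,
              baseline, process_name, description):
    """Insert one detected event; columns match the INSERT shown above."""
    conn.execute(
        """
        INSERT INTO events
        (timestamp, event_type, severity, metric, value,
         baseline, process_name, description)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (time.time(), event_type, severity, metric, value,
         baseline, process_name, description),
    )
    conn.commit()

# Example usage (assumes the events table already exists):
# conn = sqlite3.connect("pc_workman.db")
# log_event(conn, "spike", "warning", "cpu", 87.0, 42.0, "chrome.exe",
#           "CPU spike: 87% (baseline: 42%, delta: +45%)")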

What I Learned Building This

1. Users Don’t Need More Data

Early versions of PC_Workman showed 20+ metrics.

Users ignored them all.

Lesson: Context, not quantity.

2. Rate Limiting Is User Experience

First version: no rate limiting.

Result: 500 alerts per hour. Unusable.

Lesson: Silence is a feature.

3. Personalization Beats Generic Thresholds

“50% CPU is high” works for nobody.

YOUR 50% vs MY 50% = different stories.

Lesson: Baselines must be personal.

PC Workman 1.6.8 - hck_GPT Insights

The Numbers

EventDetector stats:

  • ~30 lines core logic
  • Handles 5 metrics
  • Max 1 alert per metric per 5 min
  • Baseline cached 60 sec
  • 3 severity levels

PC_Workman stats:

  • 800+ hours development
  • Built on 94°C laptop
  • v1.6.8 current (v2.0 -> Microsoft Store, Q3 2026)
  • 60+ downloads
  • 17 stars
  • Open source, MIT licensed

Try It Yourself

PC_Workman is open source.

EventDetector is in hck_stats_engine/events.py.

Download, run, break it, improve it.

GitHub: github.com/HuckleR2003/PC_Workman_HCK
The file shown in this post: PC_Workman_HCK/hck_stats_engine/events.py

Building in public. Code Autopsy every Wednesday.

Follow the journey:

  • Twitter: @hck_lab
  • LinkedIn: Marcin Firmuga
  • Everything: linktr.ee/marcin_firmuga

Next Week: Wednesday Code Autopsy #2

Topic: ProcessAggregator – how PC_Workman tracks which apps eat your CPU without destroying performance.
See you Wednesday.

Questions? Comments? Roasts? I’m building in public. Feedback welcome.

About the Author

I’m Marcin Firmuga. Solo developer and founder of HCK_Labs.

I created PC Workman, an open-source, AI-powered PC resource monitor built entirely from scratch on dying hardware during warehouse shifts in the Netherlands.

This is the first time I’ve given one of my projects a real, dedicated home.

Before this: game translations, PC technician internships, warehouse operations in multiple countries, and countless failed projects I never finished.

But this one? This one stuck.
800+ hours of code. 4 complete UI rebuilds. 16,000 lines deleted.
3 AM all-nighters. Energy drinks and toast.

And finally, an app I wouldn’t close in 5 seconds.
That’s the difference between building and shipping.

PC_Workman is the result.

WebStorm 2026.1: Service-powered TypeScript Engine, Junie, Claude Agent, and Codex in the AI chat, Framework Updates, and More

WebStorm 2026.1 is now available!

This release focuses on the everyday web development workflows where IDE support matters most, helping you stay productive in large TypeScript projects, making it easy to keep up with frameworks that evolve quickly, and bringing AI tools into the IDE so you don’t have to switch contexts.

The highlights of this release include:

AI-powered development

  • Junie, Claude Agent, and Codex available directly in the AI chat
  • ACP Registry for discovering and installing agents
  • Next edit suggestions

Better TypeScript support

  • Service-powered TypeScript engine enabled by default
  • Alignment with TypeScript 6
  • String-literal import/export support

Frameworks and technologies

  • Highlighting for new React directives
  • Angular 21 template syntax
  • Vue TypeScript integration updates
  • Astro language server configuration
  • Svelte generics support
  • Support for modern CSS color spaces

This release also includes numerous fixes and quality-of-life improvements to the support for TypeScript, React, Angular, Vue, Astro, Prettier, and more.

Want a guided tour of WebStorm 2026.1? Check out our livestream for a detailed walkthrough of the biggest updates in this release.

You can update to WebStorm 2026.1 via the Toolbox App or download it directly from our website.

DOWNLOAD WEBSTORM 2026.1

Highlights

AI

Junie, Claude Agent, and Codex available directly in the AI chat

Different AI tools are good at different tasks, but switching between them can break your flow. In addition to Junie, Claude Agent, and most recently Codex, you can now choose from more agents in the AI chat, including Cursor, GitHub Copilot, and dozens of external agents supported via the Agent Client Protocol.

With the new ACP Registry, you can discover available agents and install them in just one click.

Next edit suggestions

Next edit suggestions are now available without consuming the AI quota of your JetBrains AI Pro, Ultimate, and Enterprise subscriptions. These suggestions go beyond traditional code completion for JavaScript, TypeScript, HTML, and CSS. Instead of updating only what’s at your cursor, they intelligently apply related changes across the entire file, helping you keep your code consistent and up to date with minimal effort.

This natural evolution of code completion delivers a seamless Tab Tab experience that keeps you in the flow.

TypeScript

More accurate and responsive TypeScript support

Large TypeScript codebases put constant pressure on the editor. WebStorm now uses the service-powered TypeScript engine by default, improving correctness while reducing CPU usage in large projects. That keeps navigation, inspections, and refactorings more responsive in everyday work.

Furthermore, if you use the TypeScript Go-based language server, WebStorm now also shows its inlay hints directly in the editor (WEB-75982).

TypeScript 6 support

Compiler defaults shape how a project behaves, so the editor needs to stay aligned with them. WebStorm 2026.1 follows the TypeScript 6 changes affecting the default types value (WEB-75541) and rootDir (WEB-75865). It also starts the process of bringing TypeScript config handling into alignment with the direction of TypeScript 7’s changes to baseUrl (WEB-76504).

String-literal import and export specifiers

WebStorm now understands string-literal names in import and export specifiers, so parsing, highlighting, navigation, and refactoring all work as expected for this standards-compliant syntax (WEB-72912, WEB-76597).

Example:

export { a as "a-b" };
import { "a-b" as a } from "./file.js";

Frameworks and technologies

Support for new React directives

Directive-based behavior is becoming more common in React, and you need to be able to spot it easily when reading a component. WebStorm now highlights the use memo and use no memo directives alongside use client and use server (WEB-75595).

Support for modern Angular template syntax

Angular templates keep getting more expressive, and the IDE’s support needs to keep pace. WebStorm 2026.1 adds support for arrow functions (WEB-76240), the instanceof operator (WEB-76528), regular expressions (WEB-75718), and spread syntax (WEB-76241) in Angular 21.x templates.

Updated Vue TypeScript integration

Reliable support in .vue files depends on staying in sync with the Vue TypeScript toolchain. WebStorm now uses @vue/typescript-plugin 3.1.8 ensuring compatibility with the latest features (WEB-75948).

Configurable Astro language server

Some Astro projects need more control over language server behavior than the defaults can provide. WebStorm now lets you pass your JSON configuration to the Astro language server directly from the IDE (WEB-75717).

Improved Svelte support

Working with typed Svelte components is easier when the IDE understands the framework-specific typing model. WebStorm now supports the generics attribute in <script> tags, enabling usage search, navigation to declarations, use of the Rename refactoring for type parameters, and the parsing of TypeScript constructs in the attribute value.

The IDE now also reports common problems relating to this feature, offers support for the @attach directive, and includes updates to the bundled svelte-language-server and typescript-svelte-plugin packages.

Modern CSS color support

Modern CSS color features are useful only if the editor can validate and preview them properly. WebStorm now supports the color() function in swatches and recognizes additional predefined CSS color spaces (WEB-76615).

That means newer color formats get proper previews and validation in the editor.

Editor and tooling improvements

Productivity

Native Wayland support

WebStorm now runs natively on Wayland by default. This transition provides Linux professionals with ultimate comfort through sharper HiDPI and better input handling, and it paves the way for future enhancements like Vulkan support.

While Wayland provides benefits and serves as a foundation for future improvements, we prioritize reliability: The IDE will automatically fall back to X11 in unsupported environments to keep your workflow uninterrupted. Learn more.

In-terminal completion

Stop memorizing commands. Start discovering them. In-terminal completion helps you instantly explore available subcommands and parameters as you type. Whether you’re working with complex CLI tools like Git, Docker, or kubectl or using your own custom scripts, this feature intelligently suggests valid options in real time.

Previously introduced for Bash and Zsh shells, it is now also available in PowerShell.

Sunsetting of Code With Me

As we continue to evolve our IDEs and focus on the areas that deliver the most value to developers, we’ve decided to sunset Code With Me, our collaborative coding and pair programming service. Demand for this type of functionality has declined in recent years, and we’re prioritizing more modern workflows tailored to professional software development.

As of version 2026.1, Code With Me will be unbundled from all JetBrains IDEs. Instead, it will be available on JetBrains Marketplace as a separate plugin. 2026.1 will be the last IDE version to officially support Code With Me, as we gradually sunset the service.

Read the full announcement and sunset timeline in our blog post.

Final words

WebStorm 2026.1 focuses on the places where the IDE’s quality most affects your everyday work, ensuring type checking stays responsive, framework support keeps up with the ecosystem, and your workflows let you stay in the editor instead of switching tools. For the complete list of changes, see the full release notes.
If you try the latest version in a real project, let us know what you like and where you run into trouble. Your feedback is what shapes the next release.

Expanding Our Core Web Development Support in PyCharm 2026.1

With PyCharm 2026.1, our core IDE experience continues to evolve as we’re bringing a broader set of professional-grade web tools to all users for free. Everyone, from beginners to backend-first developers, is getting access to a substantial set of JavaScript, TypeScript, and CSS features that were previously only available with a Pro subscription.

React, JavaScript, TypeScript, and CSS support

Leverage a comprehensive set of editing and formatting tools for modern web languages within PyCharm, including:

  • Basic React support with code completion, component and attribute navigation, and React component and prop rename refactorings.
  • Advanced import management:
    • Enjoy automatic JavaScript and TypeScript imports as you work.
    • Merge or remove unnecessary references via the Optimize imports feature.
    • Get required imports automatically when you paste code into the editor.
  • Enhanced styling: Access CSS-tailored code completion, inspections, and quick-fixes, and view any changes in real time via the built-in web preview.
  • Smart editor behavior: Utilize smart keys, code vision inlay hints, and postfix code completions designed for web development.

Navigation and code intelligence

Finding your way around web projects is now even more efficient with tools that allow for:

  • Pro-grade navigation: Use dedicated gutter icons for Jump to… actions, recursive calls, and TypeScript source mapping.
  • Core web refactorings: Perform essential code changes with reliable Rename refactorings and actions (Introduce variable, Change signature, Move members, and more).
  • Quality control: Maintain high code standards with professional-grade inspections, intentions, and quick-fixes.
  • Code cleanup: Identify redundant code blocks through JavaScript and TypeScript duplicate detection.

Frameworks and integrated tools

With the added essential support for some of the most popular frontend frameworks and tools, you will have access to:

  • Project initialization: Create new web projects quickly using the built-in Vite generator.
  • Standard tooling: Standardize code quality with integrated support for Prettier, ESLint, TSLint, and StyleLint.
  • Script management: Discover and execute NPM scripts directly from your package.json.
  • Security: Check project dependencies for security vulnerabilities.

We’re excited to bring these tried and true features to the core PyCharm experience for free! We’re certain these tools will help beginners, students, and hobbyists tackle real-world tasks within a single, powerful IDE. Best of all, core PyCharm can be used for both commercial and non-commercial projects, so it will grow with you as you move from learning to professional development.

IntelliJ IDEA 2026.1 Is Out!

IntelliJ IDEA 2026.1 is here, and it comes packed with an array of new features and enhancements to elevate your coding experience! 

You can download this latest release from our website or update to it directly from inside the IDE, via the free Toolbox App, or using snap packages for Ubuntu.

As always, all new features are brought together on the What’s New page, with detailed explanations and demos.

Explore the What’s New page

In addition to the What’s New page, our developer advocates got together to discuss and demonstrate the key updates. If you prefer watching to reading, check it out.

IntelliJ IDEA 2026.1 brings built-in support for more AI agents, including Codex, Cursor, and any ACP-compatible agent, and delivers targeted, first-class improvements for Java, Kotlin, and Spring. The release also advances IntelliJ IDEA’s mission to provide support for the latest languages and tools from day one.

Any agent, built-in:

  • ACP Registry: Browse and install AI agents in one click.
  • Git worktrees: Work in parallel branches and hand one off to an agent while you keep moving in another.
  • Database access for AI agents: Let Codex or Claude Agent query and modify your data sources natively.

Intelligence in the platform:

  • Quota-free next edit suggestions: Propagate changes throughout a given file with IDE-driven assistance.
  • Spring runtime insight: Inspect injected beans, endpoint security, and property values without pausing execution.
  • Kotlin-aware JPA: Detect and fix Kotlin-specific pitfalls in Jakarta Persistence entities.

First-class language support:

  • Java 26: Enjoy day-one support, including preview features.
  • Kotlin 2.3.20: Enjoy day-one support, including experimental features.
  • C/C++ in IntelliJ IDEA: Access first-class C/C++ coding assistance for multi-language projects.
  • Support for JavaScript without an Ultimate subscription.

Productivity and environment:

  • Expanded command completion, now with AI actions, postfix templates, and config file support.
  • Better performance for large-scale TypeScript projects.
  • Native Dev Container workflow: Open containerized projects as if they were local.

Along with new features, 2026.1 delivers numerous stability, performance, and usability improvements across the platform. These are described in a separate What’s Fixed blog post.

As always, your feedback plays an important role in shaping IntelliJ IDEA. Tell us what you think about the new features and help guide future improvements.

Join the discussion on X, LinkedIn, or Bluesky, and if you encounter any issues, please report them via YouTrack.

For full details of the improvements introduced in version 2026.1, refer to the release notes.

Thank you for using IntelliJ IDEA. Happy developing!

What’s fixed in IntelliJ IDEA 2026.1

Welcome to the overview of fixes and improvements in IntelliJ IDEA 2026.1.

In this release, we have resolved over 1,000 bugs and usability issues, including 334 reported by users. Below are the most impactful changes that will help you work with greater confidence every day.

Performance

We continue to prioritize reliability, working to improve application performance, fix freezes, optimize operations, and cover the most common use cases with metrics. Using our internal tools, we identified and resolved 40 specific scenarios that caused UI freezes.

However, internal tooling alone cannot uncover every issue. To identify additional cases, we enabled automatic error and freeze reporting in EAP builds. By collecting this data, we gain a real, unfiltered picture of what’s going wrong, how often it happens, and how many users are affected. This allows us to prioritize fixes based on real impact rather than guesswork.

As always, we prioritize your privacy and security. When using EAP builds, you maintain full control and can disable automatic error and freeze reporting in Settings | Appearance & Behavior | System Settings | Data Sharing. Thank you for helping us build better tools!

Terminal

Version 2026.1 enhances your productivity by streamlining the experience offered by the terminal, a crucial workspace for developer workflows involving CLI-based AI agents.

First, we fixed the Esc behavior – it is now handled by the shell instead of switching focus to the editor, so it does not break the AI-agent workflow. Additionally, Shift+Enter now inserts a new line, making it easier to write multi-line prompts and commands directly. This behavior can be disabled in Settings | Advanced Settings | Terminal.

We also improved the detection of absolute and relative file paths in terminal output, allowing you to open files and folders with a single click in any context. When you encounter compilation or build errors, or submit a task to an AI coding agent, you can jump directly to the referenced file and review or fix issues faster.

Link navigation is activated by holding Ctrl (or Cmd on macOS) and clicking – just like in external terminals.

JVM language support

Better Kotlin bean registration support

Kotlin’s strong DSL capabilities are a perfect fit for Spring Framework 7’s BeanRegistrar API. In 2026.1, we’ve made working with programmatic registration as productive as annotation-based configuration.

The IDE ensures complete visibility into your application structure thanks to the Structure tool window, providing better endpoint visibility, intuitive navigation with gutter icons, integrated HTTP request generation, and path variable support.

New Kotlin coroutine inspections

To help maintain code quality, we’ve introduced a set of new inspections for the Kotlin coroutines library, covering common pitfalls.

Read more about coroutine inspections in this article.

Scala

Working with sbt projects inside WSL and Docker containers is now as smooth as working with local projects. We’ve also improved code highlighting performance and sped up sbt project synchronization.

To reduce cognitive load and provide a more ergonomic UI, we’ve redesigned the Scala code highlighting settings. A new Settings page consolidates previously scattered options, making them cleaner, more intuitive, and easier to access.

You can now disable built-in inspections when compiler highlighting is sufficient, or configure compilation delay for compiler-based highlighting. Settings for Scala 2 and Scala 3 projects are now independent, and the type-aware highlighting option has been integrated with the rest of the settings.

You can read more about these updates in this article.

Spring

Spring support remains a core focus for IntelliJ IDEA. We are committed to maximizing reliability and reducing friction in your daily development.

In this release, we made a dedicated effort to address issues related to running Spring Boot applications from the IDE. There are now even fewer reasons to run your application in the terminal – just run it in the IDE and use the debugger when you need deeper insights.

Spring Boot 4 API versioning support

This is a new Spring Boot feature, and we keep improving its support based on your feedback. In this version, we added support for .yml files in version configuration, fixed false positives, and added a couple of useful inspections, so you get instant feedback about issues without running the app.

Flyway DB Migrations

To ensure a reliable and distraction-free experience, the IDE now verifies migration scripts only when a data source is active, eliminating false-positive errors when the data source is disconnected.

At the same time, Flyway scripts got correct navigation to the table definitions, and SQL autocompletion for any files and tables defined in them.

User interface

With IntelliJ IDEA 2026.1, we’ve continued to prioritize ultimate comfort and an ergonomic UI, ensuring your workspace is as accessible and customizable as your code.

The long-awaited ability to sync the IDE theme with the OS is now available to Linux users, bringing parity with macOS and Windows. Enable it in Settings | Appearance & Behavior | Appearance.

The code editor now supports OpenType stylistic sets. Enjoy more expressive typography with your favorite fonts while coding. Configure them via Editor | Font, and preview glyph changes instantly with a helpful tooltip before applying a set.

Windows users who rely on the keyboard can now bring the IDE’s main menu into focus by pressing the Alt key. This change improves accessibility for screen reader users.

Version control

We continue to make small but impactful improvements that reduce friction and support your everyday workflow.

You can now amend any recent commit directly from the Commit tool window – no more ceremonies involving interactive rebase. Simply select the target commit and the necessary changes, then confirm them – the IDE will take care of the rest.

In addition to Git worktrees, we’ve improved branch workflows by introducing the Checkout & Update action, which pulls all remote changes.

Furthermore, fetching changes can now be automated – no need for a separate plugin. Enable Fetch remote changes automatically in Settings | Git.

In-IDE reviews for GitLab merge requests now offer near feature parity with the web interface. Multi-line comments, comment navigation, image uploads, and assignee selection when creating a merge request are all available directly in the IDE, so you can stay focused without switching to the browser.

The Subversion, Mercurial, and Perforce plugins are no longer bundled with the IDE distribution, but you can still install them from JetBrains Marketplace.

Databases

We’ve enhanced the Explain Plan workflow with UI optimizations for the Query Plan tab, an additional separate pane for details about the execution plan row, inner tabs that hold flame graphs, and an action to copy the query plan in the database’s native format.

JetBrains daemon

IntelliJ IDEA 2026.1 includes a lightweight background service – jetbrainsd – that handles jetbrains:// protocol links from documentation, learning resources, and external tools, opening them directly in your IDE without requiring you to have the Toolbox App running.

Sunsetting of Code With Me

As of version 2026.1, Code With Me will be unbundled from all JetBrains IDEs and will instead be available as a separate plugin on JetBrains Marketplace. Version 2026.1 will be the last IDE release to officially support Code With Me as we gradually sunset the service.

Read the full announcement and timeline in our blog post.

Enhanced AI management and analytics for organizations

We are working hard to provide development teams with centralized control over AI and built-in analytics to understand adoption, usage, and cost. As part of the effort, we’ve introduced the JetBrains Console. It adds visibility into how your teams use AI in practice, including information about active users, credit consumption, and acceptance rates for AI-generated code.

The JetBrains Console is available to all organizations with a JetBrains AI subscription, providing the trust and visibility required to manage professional-grade development at any scale.

That’s it for this overview.

Let us know what you think about the fixes and priorities in this release. Your feedback helps us steer the product so it works best for you!

We’d also love to hear your thoughts on this overview and the format in general.

Update to IntelliJ IDEA 2026.1 now and see how it has improved. Don’t forget to join us on X , Bluesky, or LinkedIn and share your favorite updates.

Thank you for using IntelliJ IDEA!

Testing Font Scaling For Accessibility With Figma Variables

Building a true culture of digital accessibility in a company is a mission of resilience and perseverance. It’s not difficult for the discourse on accessibility to fall into the usual clichés. Accessibility is very important for people. The accessibility of digital products and services promotes inclusion. Or even, all professionals on the teams should be involved in accessibility work. Of course. No one in their right mind will dispute any of these statements (I hope).

However, the second part of this conversation, which very few companies reach, is “how?” How do we make this happen in the midst of the day-to-day work of digital transformation teams, which, as we all know, are buried under demanding schedules, often with very few people available? Most of the time, the choice ends up being between “we do this” or “we do that.” And it shouldn’t be, because in that trade-off I have never seen accessibility win.

It shouldn’t be this way, and it doesn’t have to be. First of all, choosing between accessibility and anything else isn’t the right choice to begin with. Accessibility is no longer just another feature to be added to the pile. It’s added value for the business and, today, a legal obligation that can have serious consequences for companies. On the other hand, there are intelligent, optimized, and impactful ways to incorporate accessibility principles into the natural dynamics of teams. It’s possible to work on accessibility without turning team operations upside down. In essence, that’s what AccessibilityOps does: empowering people and providing teams with simple processes so they can integrate accessibility work into their daily routines without disproportionate effort.

Accessibility And Design

Working on digital accessibility in design can involve several actions. It’s clear that we need to pay particular attention to color and how it’s used to convey meaning. Of course, the interaction sizes of elements must be comfortable. But, most importantly, we must think about design from a versatile perspective. An interface isn’t a poster. We can control many aspects of that design, but how users interact with the interface is subject to an endless number of variables. The type of device, context, purpose, network quality, etc. All of this greatly affects each person’s experience and interaction. Along with all this, when digital accessibility concerns are brought into the design process, it adds even more variables.

People often use what are called assistive technologies and strategies. Basically, these are technological tools or, at the very least, “tricks” that people resort to in order to find more comfortable ways of using an interface. Screen readers, commonly associated with blind users (but by no means useful only to them), are one example of an assistive technology. Changing colors or the color contrast between elements is another. Increasing the font size (the focus of this article) is yet another. There are countless assistive technologies and strategies, almost as many as there are contexts of use for each person.

We Don’t Control Everything

In other words (and this is the “bad news” for us designers), “our design” is subject, from the users’ perspective, to transformations that we don’t control. It will be “transformed” by the user, ensuring that they can interact with the application and everything it offers in the most comfortable way possible. And that’s a good thing. If this happens and everything goes well, we will have surely done our accessibility work very well, and we all deserve congratulations. If the user applies any of these support technologies and strategies and still cannot use the digital application, it’s a sign that something is not working as it should.

Oh, and speaking of which. Don’t even think about blocking the use of these technologies or support strategies. They may be “destroying” your beautiful design, but they are allowing more and more people to actually use the app. In the end, wasn’t that exactly what we promised we wanted to do? Design for (all) people. Without exception?

Increase Font Size

How many times have we heard someone — friends, family, or even colleagues — complaining that this or that text is too small? Text plays a very important role in the digital experience. Much information is conveyed through text: instructions for use, button captions, or interactive elements. All of this uses text as a communication tool. If reading all these elements is difficult, naturally, the experience is severely impaired.

Comfortable text reading, regardless of its function, is a non-negotiable principle. This reading can be facilitated by using comfortable sizes in the design. However, supporting technologies and strategies, through the functionality of increasing font size, can also help improve readability. According to APPT data, 26% of Android and iOS mobile device users increase the default font size (data from February 2026). One in four users increases the font size on their smartphone. This is a very significant sample of people, making this functionality unavoidable in design processes.

Compliance With Guidelines

Increasing font size in interfaces can represent a huge design challenge. It’s important to understand that, suddenly, some text elements, due to user actions, can double in size from their initial size.

“Except for captions and images of text, text can be resized without assistive technology up to 200 percent without loss of content or functionality.”

— Success criterion 1.4.4, “Resizing Text” of the Web Content Accessibility Guidelines (WCAG), version 2.2

This success criterion is at the AA compliance level, meaning this is an absolutely mandatory feature according to any legal framework.

It’s easy to understand the 200% in this success criterion. If we assume we design the interfaces at a 100% scale, meaning the element size is the initial size, then increasing the text by up to 200% will correspond to doubling the initial size. Other enlargement scales can also be used, such as 120%, 140%, and so on. In other words, we have to ensure that users can increase the text to double its initial size through supporting technologies or strategies (and this is not a minor detail).

To comply with this standard, we don’t need to provide text size increase tools in the interfaces. In practice, these features are nothing more than redundancy. Devices already allow this to be done in a standardized way. Users who really need this setting know it (because, without it, their lives would be much more difficult). Well, they already have this setting applied across their device. And that means we can eliminate these additional interface elements, simplifying the experience.

Standardized Access

An important concept to remember about assistive technologies, particularly in this case regarding increasing font size, is that most devices already have many of these tools installed by default. In other words, in many cases, users don’t need to purchase their own software or buy a specific type of device just to have this functionality.

Whether on mobile devices or even in web browsers, in the vast majority of cases, it’s easy to find installed features that allow you to increase the default font size we’re using throughout the interface. This principle of increasing font size can be applied to digital products, such as apps, or even to any type of website running on the standard web browsers used today.

iPhones

On iPhone devices, the font size increase feature is integrated by default. To use this feature, simply access the “Settings” panel, select “Accessibility,” and within the “Vision” options group, access the “Text Size and Display” feature and configure the desired font size increase on that screen.

Google Chrome

Web browsers also offer, by default, the functionality to increase font size. For example, in Google Chrome, this feature is available in the “Settings” panel, specifically in the “Appearance” area. In the list of options that appear in this group, simply select the “Font size” option. Normally, the “Medium — Recommended” option will be selected. You can change this setting to any other available font size. Try, for example, the “Very large” option.

Test In Figma

To ensure that digital accessibility work becomes effective in the daily lives of teams, it is essential to find simple work processes: actions or initiatives that can be integrated into the team’s routine, that address accessibility in an integrated way, and that do not require a dramatic transformation of the current reality. If such a transformation were necessary, it simply wouldn’t happen most of the time. Therefore, designing simple work processes is half the battle for accessibility to truly happen, in this case within a design team as well.

Regarding testing font size increases in design, we have extraordinary tools at our disposal today. Those who remember the days of designing complex interfaces in Adobe Photoshop will recognize the differences in the tools we have today (and thankfully so). It’s now possible, through tools like Figma, to create such dynamism in design that testing font size increases for accessibility becomes almost unavoidable for the team.

Note: To take this test, you need to have a strong grasp of Figma’s text styles, auto layouts, and variables. These three are fundamental tools for success without much extra effort. If you haven’t yet mastered these features, it’s highly recommended that you start there. Don’t skip steps. Learning is a gradual process that must be followed in a structured, step-by-step manner.

Where Do We Want To Go?

The font size increase test in Figma that we want to perform is simple. We want to have a set of variables available for all the text styles we use in the interface, allowing us to choose whether we want to see the interface with the text at a scale of 100%, 120%, 140%, 160%, 180%, or 200%. As we apply this set of variables (much like applying variables for light and dark mode), we observe the transformations of the text in the interface and understand to what extent adaptations are needed in each version of the interface with different typographic scales.

How Do We Make This Happen?

For this test to go so smoothly, you need to do some groundwork. Design systems can greatly help optimize much of this initial work. But I won’t lie to you. For the test to work well, your design needs to have a very serious level of organization and systematization.

This isn’t really a guide, because each team will have its own work model, and these recommendations can be applied in different ways (and that’s okay). However, for this test to work, it’s important to ensure certain assumptions in the design. To help you phase the implementation of this test model, here are some steps to follow. Step-by-step instructions to guide you in organizing your files and ensuring you can fully execute this test in the simplest and most practical way possible.

1. Designing The Interfaces

It all starts with the design. Before any testing, the focus should, as it should, be on the design of each interface that we will want to test later. At this stage, there is still no specific concern with the font size increase test that we will perform later. Naturally, all interface design should, from the outset, follow the most basic accessibility recommendations applied to design.

2. Apply Auto Layouts To All Elements

In every screen design you create, you’ll need to ensure you apply auto layouts perfectly. This is a very important step. It’s this consistent application of auto layouts to the entire structure and design elements that will later guarantee the scalability of the interface when we start testing font size increases. You really can’t underestimate this step. If you don’t give it the attention it deserves, you’ll watch everything fall apart like a bull in a china shop once we start testing typographic scaling in the interfaces.

3. Structuring And Applying Text Styles

To perform our font size increase test, we’ll also need you to have applied text styles to each interface design. You probably even started creating them as you were drawing. Great. If you haven’t done so, it’s important that you do it now. For the test to work perfectly, we really need this. Don’t leave any text element in the design without a text style applied.

4. Define The Set Of Variables 100%

This test forces a fairly high degree of optimization. In practice, this means we will have to use Figma variables for all the characteristics of the text styles we have in the interface. At this stage, you must define Figma “number” variables for at least the font-size and line-height of the text styles you applied to the design. With this step, you are defining the font size scale values for the 100% visualization model, that is, the initial and reference version of the design. It is important that you structure these variables for each text style in the design because, subsequently, we will have to consider the enlargement scale of each of these text elements.

5. Apply The Variables To The Text Styles

Having defined the variables for the 100% scale text styles, you must now apply them to the elements of the text styles already created. Don’t forget to apply variables at least to the font-size and line-height characteristics. If you have more typographical variables, that’s fine. But you should at least have variables applied to font-size and line-height. This is really very important.

6. Define The Variables For Increasing The Text Size

Now that you have the variables applied to the 100% scale text styles, the next step is to create the variables for the other font size increase scales. In practice, you have to create the variables that will tell the system what font size each text style will grow to when the increase scale is 120%, 140%, 160%, etc.

To define the font-size and line-height values, simply multiply the initial value by the scale percentage. For example, if a text style has a font-size of 16px, the size for the 120% scale will be 16 multiplied by 1.2, which gives a result of 19.2. Repeat this calculation for all font-size and line-height values of the font size increase scale percentages you choose.

You can also choose whether or not to apply rounding to the final values. This is an approximate test, and therefore any differences that may arise from rounding will not affect the final perception of the test result.
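If you’d rather not do these multiplications by hand, a few lines of Python can generate the whole table of values (the style names and base sizes below are purely illustrative):

# Generate font-size / line-height values for each enlargement scale.
base_styles = {
    "body":    {"font_size": 16, "line_height": 24},
    "heading": {"font_size": 32, "line_height": 40},
}
scales = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]

for name, style in base_styles.items():
    for scale in scales:
        font_size = round(style["font_size"] * scale, 1)
        line_height = round(style["line_height"] * scale, 1)
        print(f'{name} @ {int(scale * 100)}%: '
              f'font-size {font_size}px, line-height {line_height}px')

For the 16px body example above, the 120% row prints 19.2px, matching the manual calculation.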

7. Apply Variables To Different Scale Versions

The moment of truth has arrived. The next step is to understand if we have everything working so that the test runs perfectly. Therefore, you should copy the original interface and apply the set of variables for each of the font size increase rates that make sense to you. Repeat this process for all the font size increase percentages you have defined.

As a suggestion, you can use the 120%, 140%, 160%, 180%, and 200% increase percentages as a reference. If you want to simplify, you can reduce the number of scaling percentages you are working with. Regardless of the number of percentages you are working with, you should always work with the minimum of 100% and 200% scales.

8. Identify Areas For Improvement

By applying different font size increase scales to the same screen, it’s easy to understand where improvements might be needed. This is where the real test of increasing font size in interface design and the most interesting accessibility work begins.

In your analysis of the various screens, keep some important aspects in mind:

  • The fact that the text appears gigantic isn’t a problem and doesn’t “ruin” the design. Remember that this can mean the difference between someone being able to use a particular product or service or not.
  • An accessibility problem exists when increasing the font size makes it impossible for the user to read certain texts or to activate certain controls.
  • For text elements that are already very large, increasing the font size might not make sense. Doing so could make those elements disproportionate, which wouldn’t improve readability (since they are already a good size) and would occupy completely unnecessary space.
  • If there are elements that appear to be popping out of the screen, the first step is to confirm how you are applying auto layout. Many design aspects can be easily resolved with the proper use of auto layout.
  • Regardless of the scale of font size increase, it is essential to maintain the visual hierarchy of the typography, as this readability is important for perceiving the different levels of information present on the screen.
  • This test can help identify elements that may need adjustments directly in the code to function well at a given scale of increase. Not everything can be solved through design alone, and that’s perfectly fine. Accessibility is essentially a team effort.

9. Make Corrections And Adjustments To The Design

Finally, based on the various screens with different text enlargement scales applied, you can make the design changes that make sense. Some of these adjustments may only be necessary in code. In these cases, you document all these suggestions and pass them on to the development team. It is also crucial to reinforce (again) that some of the problems you may encounter in the design can be quickly resolved in the design process, with the simple and correct application of auto-layout properties.

10. Go Back To The Beginning And Repeat The Process

This is a cyclical approach. This means you should repeat these steps, or variations thereof, as many times as necessary throughout the project. It’s natural that, over time and with process optimization, some of these steps will cease to make sense. That’s absolutely not a problem. But the most important thing to realize here is that accessibility and this process of testing font size increases shouldn’t be done just once, and that’s it. It’s a test to be done many, many times throughout the day-to-day work of each project and team.

The Role Of Design Systems

At first glance, this list of steps might seem like a complex exercise. But it’s not. This is because the vast majority, if not all, of these steps are easy to execute in any context where a design system exists. In fact, design systems have become an unavoidable standard in the Product Design industry. We can discuss what each team calls a design system, but the truth is that it’s very difficult today to find a Product Design team that doesn’t have, at the very least, a minimally structured library of components and styles.

With this foundation, whether more or less documented, it’s very easy to apply this type of font size increase test using Figma variables. Furthermore, if your design system already has, for example, structured variables for light and dark mode, it means you’re already applying the exact same principles we used to perform this test. So, nothing new.

Working with design systems involves a level of structuring and organization that is also very useful for creating this type of test. There’s a myth that design systems limit creativity. This is not true. Design systems help solve the “bureaucratic” part of design, so we can actually have more time for what matters: in this case, testing accessibility and building more and more products and services that are truly accessible to the greatest number of people.

Example File

It’s always easier to see an example than just read a description of a process. If this is true in many disciplines of knowledge, in design, this premise makes even more sense. Therefore, in this Figma file, freely published and openly available to the community, you’ll find a practical example of the entire testing process described here. Remember that this is just an example. There may be countless ways to perform this type of test within the context of a Figma file.

Be sure to look at this approach with a critical eye. It’s a suggestion for testing font size increases that follows a specific process. Despite this, the approach should be adapted to your team’s specific reality, processes, and level of maturity. Simply copying formulas from other teams without understanding if they make sense in our own context is a sure way to make accessibility efforts disproportionate. Every situation is unique. This approach attempts to simplify accessibility work as much as possible in this specific context. And remember: if something happens, however small, it’s a step forward, not a step backward. And that should be celebrated by everyone on the team.

Reducing Laravel Permission Queries Using Redis (Benchmark Results)

Laravel permissions work great… until your application starts to scale.

If you’re using role/permission checks heavily, you might be hitting your database more often than you think.

In this article, I’ll show you a simple benchmark comparing the default behavior vs a Redis-based approach.

The Problem

In many Laravel applications, permission checks look like this:

$user->can('edit-post');

Looks harmless, right?
But under the hood, this can trigger multiple database queries, especially when:

  • You have many users
  • Your role/permission structures are complex
  • You run frequent authorization checks

At small scale, it’s fine.
At large scale… it adds up quickly.

Benchmark Setup

To test this, I created a simple benchmark comparing:

  • Default Laravel permissions behavior
  • Redis-cached permissions

Benchmark repo: https://github.com/scabarcas17/laravel-permissions-redis-benchmark

The idea was simple:

  • Run multiple permission checks
  • Measure database queries
  • Compare performance

Results

Default Behavior

  • Multiple database queries per permission check
  • Repeated queries for the same permissions
  • Increased load under high traffic

With Redis

  • Permissions cached in Redis
  • Near-zero database queries after first load
  • Much faster response times

Key Insight

The biggest issue is not the first query…
It’s the repeated queries for the same permissions.
By caching permissions in Redis, we eliminate redundant database access.
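The package itself is PHP, but the underlying idea is a plain cache-aside pattern. Here is a language-agnostic sketch of that pattern, written in Python with redis-py for brevity; the key format, TTL, and loader callback are illustrative assumptions, not the package’s actual implementation:

import json
import redis

r = redis.Redis()
CACHE_TTL = 600  # seconds; illustrative, tune to how often permissions change

def user_permissions(user_id, load_from_db):
    """Cache-aside lookup: hit Redis first, fall back to the database."""
    key = f"permissions:user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return set(json.loads(cached))

    permissions = load_from_db(user_id)  # the expensive part
    r.setex(key, CACHE_TTL, json.dumps(sorted(permissions)))
    return set(permissions)

def can(user_id, permission, load_from_db):
    """Permission check backed by the cached permission set."""
    return permission in user_permissions(user_id, load_from_db)

The first check per user pays the database cost; every check after that is a single Redis lookup until the TTL expires or the cache is invalidated on a role/permission change.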

The Solution

To test this approach in a real scenario, I built a small package: https://packagist.org/packages/scabarcas/laravel-permissions-redis

GitHub repo:
https://github.com/scabarcas17/laravel-permissions-redis

This package adds a Redis layer on top of Laravel permissions, reducing unnecessary queries.

When Does This Matter?

This approach is especially useful if your app has:

  • High traffic
  • Many permission checks per request
  • Complex role/permission structures
  • Performance bottlenecks related to authorization

Final Thoughts

Laravel’s default behavior is solid and works well for most applications.

But if you’re scaling and noticing performance issues, caching permissions can make a real difference.

This benchmark is just a starting point—but it clearly shows the impact of reducing repeated database queries.

Feedback

I’d love to hear your thoughts:

  • Have you experienced performance issues with permissions?
  • How are you handling caching in your apps?

I accidentally gave my AI agent access to my live Payment key. Here’s what I built.

While building an agent last week, I realized something uncomfortable: my agent had my live Payment API key sitting in its context window.

One prompt injection attack. One malicious tool response. One leaked log file. And that key is gone.

I couldn’t find a clean solution, so I built one.

What I built

AgentGuard is a credential proxy for AI agents. Instead of giving your agent real API keys, you give it a token. When the agent makes an API call, it goes through the AgentGuard proxy, which:

  1. Validates the agent token
  2. Decrypts the real credential server-side
  3. Injects it into the request
  4. Forwards to the target API
  5. Logs the call

The agent never sees the real key. Ever.
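To make that flow concrete, here is a minimal, purely illustrative sketch of what such a proxy does server-side (a small Flask app; AgentGuard’s real implementation, storage, and crypto are not shown in this post, so every name here is an assumption):

import requests
from flask import Flask, Response, request

app = Flask(__name__)

def validate_token(token):
    # Placeholder: a real implementation checks a token store. (Step 1)
    return token == "demo_agent_token"

def decrypt_credential(cred_id):
    # Placeholder: a real implementation decrypts the stored credential. (Step 2)
    return "sk_live_placeholder"

def log_call(token, cred_id, path):
    # Placeholder: a real implementation writes an audit record. (Step 5)
    print(f"audit: token={token} credential={cred_id} path={path}")

@app.route("/v1/<path:path>", methods=["GET", "POST"])
def proxy(path):
    token = request.headers.get("X-AgentGuard-Token", "")
    cred_id = request.headers.get("X-AgentGuard-Credential", "")

    if not validate_token(token):
        return Response("invalid agent token", status=401)

    real_key = decrypt_credential(cred_id)

    # Steps 3-4: inject the real credential and forward to the target API
    upstream = requests.request(
        method=request.method,
        url=f"https://api.stripe.com/v1/{path}",
        headers={"Authorization": f"Bearer {real_key}"},
        data=request.get_data(),
    )
    log_call(token, cred_id, path)
    return Response(upstream.content, status=upstream.status_code)

The point of the design is that the decrypted key exists only inside the proxy process for the duration of one request; the agent only ever holds the revocable token.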

The code change is 3 lines

Before:
requests.post("https://api.stripe.com/v1/charges",
              headers={"Authorization": "Bearer sk_live_real_key…"})

After:

requests.post("https://proxy.agent-guard.dev/v1/charges",
              headers={
                  "X-AgentGuard-Token": "your_agent_token",
                  "X-AgentGuard-Credential": "your_credential_id",
              })

That’s it. Base URL changes, two headers added, everything else stays the same.

What you also get

  • Full audit log of every API call your agent makes
  • Instant revocation — one click kills an agent’s access
  • Zero-knowledge encryption — keys encrypted in your browser,
    we literally cannot read them

Try it

agent-guard.dev — free to start, no credit card.

Would love feedback from anyone building agents in production.
What am I missing? What would make this actually useful for you?

An AI can now build in 1 hour what used to take a team 1 year. This isn’t vibe coding anymore. This is agentic coding.

A Google engineer recently shared something wild.
Claude Code rebuilt in 1 hour what took a team 1 year.
That sparked one big question:
Will this change how we work?
👉 Yes. But not the way you think.
🧠 Vibe coding was just the start
We’ve been talking about vibe coding: prompting instead of coding line by line.

That was step one.
Agentic coding is step two.
🤖 Assistant vs Agent
Assistants (Copilot, Claude):
You guide every step
You stay in control
Agents:
You give a goal
They plan, execute, test, fix
They iterate autonomously

👉 Example:
“Refactor this module + add tests”
The agent:
updates files
runs tests
fixes errors
delivers a ready result
You don’t code every step anymore.
You supervise the system.

⚙️ What’s changing right now
This isn’t theory.
Teams are already:
automating workflows
running multiple agents in parallel
reducing manual dev work
👉 The real gain is not just code generation

👉 It’s workflow automation
🧩 Your new role
The best devs are becoming:
Orchestrators
You:
define goals
delegate smartly
validate outputs
Not less technical.
Just more strategic.

⚠️ Reality check
AI won’t replace understanding.
Bad supervision = bad code.
Code can be generated.
Understanding cannot.
🛠️ How to start
  • automate small tasks first
  • write clearer prompts (goals > instructions)
  • always review before shipping

🔥 Final thought
The shift is already here.
The question is no longer:
“Should I use AI?”
But:
“Am I using it the right way?”

Solana Program Authority Security: 5 Upgrade Guardrails That Would Have Saved Step Finance’s $27M

On January 31, 2026, Step Finance lost 261,854 SOL (~$27.3 million) — not to a smart contract bug, but to compromised executive devices and stolen private keys. The attacker gained control of the program upgrade authority, deployed a malicious version, and drained the treasury in minutes.

Step Finance, SolanaFloor, and Remora Markets all shut down permanently in March. No smart contract audit would have prevented this. The vulnerability was operational: a single point of failure in program authority management.

This is a pattern-level problem. Here are five guardrails that make upgrade authority compromise survivable.

The Upgrade Authority Problem

Every upgradeable Solana program has an upgrade_authority — a single pubkey that can deploy new bytecode at any time. By default, this is the deployer’s wallet. If that key is compromised, the attacker owns the program.

┌──────────────────────────────────────────┐
│         DEFAULT SOLANA UPGRADE           │
│                                          │
│  Developer Wallet (hot key)              │
│       │                                  │
│       ▼                                  │
│  solana program deploy program.so        │
│       │                                  │
│       ▼                                  │
│  Program instantly updated               │
│  No delay. No approval. No alert.        │
└──────────────────────────────────────────┘

This is the Step Finance scenario. One compromised laptop → full program control → treasury drained.
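Before layering on guardrails, it is worth confirming who holds that pubkey today. Below is a sketch that walks a program account to its ProgramData account over plain JSON-RPC and prints the authority; the parsed field names reflect my understanding of the upgradeable loader's jsonParsed output, so verify them against your RPC provider:

# Sketch: confirm who currently holds a program's upgrade authority via
# plain JSON-RPC. Field names under "parsed" are assumptions about the
# upgradeable loader's jsonParsed output; double-check with your RPC.
import requests

RPC_URL = "https://api.mainnet-beta.solana.com"
PROGRAM_ID = "<PROGRAM_ID>"

def parsed_account_info(pubkey):
    resp = requests.post(RPC_URL, json={
        "jsonrpc": "2.0", "id": 1, "method": "getAccountInfo",
        "params": [pubkey, {"encoding": "jsonParsed"}],
    }, timeout=30)
    return resp.json()["result"]["value"]["data"]["parsed"]["info"]

# The program account points at its ProgramData account, which stores the
# authority (the second response also includes the bytecode, so it is large)
program_info = parsed_account_info(PROGRAM_ID)
program_data_info = parsed_account_info(program_info["programData"])
print("Upgrade authority:", program_data_info.get("authority") or "none (program is immutable)")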

Guardrail 1: Multisig Upgrade Authority

The minimum viable defense: transfer upgrade authority to a multisig.

Squads Protocol is the standard on Solana. Set up an M-of-N multisig where no single compromised key can trigger an upgrade:

# Point the program's upgrade authority at a Squads vault
# (the multisig itself, e.g. 3-of-5, is created in the Squads app)
solana program set-upgrade-authority <PROGRAM_ID> \
  --new-upgrade-authority <SQUADS_VAULT_ADDRESS> \
  --upgrade-authority <CURRENT_KEYPAIR>

Critical configuration choices:

  • Threshold: ≥ 3-of-5 — survives 2 compromised keys
  • Key storage: Hardware wallets (Ledger) — resistant to malware
  • Geographic distribution: ≥ 2 jurisdictions — survives physical seizure
  • Recovery plan: Documented, tested — prevents lockout

What it prevents: A single compromised device can no longer upgrade the program.

What it doesn’t prevent: Social engineering all signers, or a malicious insider. That’s where guardrail 2 comes in.

Guardrail 2: Time-Locked Upgrades

Even with multisig, upgrades should never be instant. A time lock gives the community and monitoring systems time to detect and respond.

use anchor_lang::prelude::*;

#[account]
pub struct UpgradeProposal {
    pub program_id: Pubkey,
    pub buffer_address: Pubkey,
    pub proposer: Pubkey,
    pub proposed_at: i64,
    pub execution_after: i64,
    pub executed: bool,
    pub cancelled: bool,
}

pub const TIME_LOCK_SECONDS: i64 = 172_800; // 48 hours

pub fn propose_upgrade(ctx: Context<ProposeUpgrade>, buffer: Pubkey) -> Result<()> {
    let clock = Clock::get()?;
    let proposal = &mut ctx.accounts.proposal;
    proposal.program_id = ctx.accounts.target_program.key();
    proposal.buffer_address = buffer;
    proposal.proposer = ctx.accounts.proposer.key();
    proposal.proposed_at = clock.unix_timestamp;
    proposal.execution_after = clock.unix_timestamp + TIME_LOCK_SECONDS;
    proposal.executed = false;
    proposal.cancelled = false;
    emit!(UpgradeProposed {
        program_id: proposal.program_id,
        buffer: buffer,
        executable_after: proposal.execution_after,
    });
    Ok(())
}

pub fn execute_upgrade(ctx: Context<ExecuteUpgrade>) -> Result<()> {
    let clock = Clock::get()?;
    let proposal = &mut ctx.accounts.proposal;
    require!(!proposal.executed, ErrorCode::AlreadyExecuted);
    require!(!proposal.cancelled, ErrorCode::Cancelled);
    require!(
        clock.unix_timestamp >= proposal.execution_after,
        ErrorCode::TimeLockActive
    );
    proposal.executed = true; // mark first so the same proposal cannot be replayed
    // Execute via BPF Loader CPI
    Ok(())
}

pub fn cancel_upgrade(ctx: Context<CancelUpgrade>) -> Result<()> {
    let proposal = &mut ctx.accounts.proposal;
    require!(!proposal.executed, ErrorCode::AlreadyExecuted);
    proposal.cancelled = true;
    emit!(UpgradeCancelled { program_id: proposal.program_id });
    Ok(())
}

Key design decisions:

  • 48-hour minimum lock for production programs holding >$1M TVL
  • Cancel is easier than execute — any single signer can cancel
  • On-chain events for every proposal, cancellation, and execution

Guardrail 3: Bytecode Verification Before Execution

A time lock is useless if nobody checks what’s being deployed:

# Build reproducibly
anchor build --verifiable

# Hash the output
sha256sum target/verifiable/program.so

# Community verifies the proposed buffer:
solana program dump <BUFFER_ADDRESS> /tmp/proposed.so
sha256sum /tmp/proposed.so
# Must match the published hash

If the proposed buffer’s hash doesn’t match the published source code’s verifiable build, cancel immediately.
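The comparison is easy to script so it runs in CI or a reviewer's shell instead of relying on someone remembering the commands. A minimal sketch, assuming the solana CLI is on PATH and EXPECTED_SHA256 is the hash the team published with the verifiable build:

# Sketch: automate the buffer-vs-published-hash check with the same CLI
# command shown above. EXPECTED_SHA256 is the hash published alongside the
# verifiable build; BUFFER_ADDRESS is the buffer named in the proposal.
import hashlib
import subprocess
import sys

BUFFER_ADDRESS = "<BUFFER_ADDRESS>"
EXPECTED_SHA256 = "<published hash of target/verifiable/program.so>"

subprocess.run(
    ["solana", "program", "dump", BUFFER_ADDRESS, "/tmp/proposed.so"],
    check=True,
)

with open("/tmp/proposed.so", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

if actual != EXPECTED_SHA256:
    print(f"MISMATCH: buffer hashes to {actual}. Cancel the upgrade.")
    sys.exit(1)

print("Buffer matches the published verifiable build.")

One caveat worth knowing: dumped account data can carry trailing zero padding relative to a local build, so if the hashes differ unexpectedly, compare with the padding stripped before assuming the buffer is malicious.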

Guardrail 4: On-Chain Upgrade Monitor

Deploy an automated monitor that alerts on any upgrade-related activity:

import { Connection, PublicKey } from "@solana/web3.js";

const BPF_LOADER = new PublicKey(
  "BPFLoaderUpgradeab1e11111111111111111111111"
);

async function monitorUpgrades(connection: Connection) {
  connection.onLogs(BPF_LOADER, (logs) => {
    const isUpgrade = logs.logs.some(l => l.includes("Upgrade"));
    const isSetAuth = logs.logs.some(l => l.includes("SetAuthority"));
    if (isUpgrade || isSetAuth) {
      // Fire alerts to Telegram, Discord, PagerDuty
      sendAlert(`🚨 Program upgrade detected: ${logs.signature}`);
    }
  });
}

If Step Finance had this, the team would have known about the malicious upgrade within seconds — not minutes after the treasury was drained.
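The sendAlert call above is left as a stub; delivery is just an HTTP call, so any webhook works. A minimal sketch in Python (the Discord webhook URL is a placeholder you create in your server's integration settings, and Telegram or PagerDuty slot in the same way):

# Sketch of the alert delivery referenced by sendAlert above.
# DISCORD_WEBHOOK_URL is a placeholder; create one under your server's
# integrations settings, or swap in Telegram/PagerDuty the same way.
import requests

DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

def send_alert(message: str) -> None:
    """Push an upgrade alert into a Discord channel via an incoming webhook."""
    requests.post(DISCORD_WEBHOOK_URL, json={"content": message}, timeout=10)

send_alert("🚨 Program upgrade detected: <signature>")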

Guardrail 5: Conditional Immutability

For mature programs, implement defense asymmetry:

pub fn freeze_program(ctx: Context<FreezeProgram>) -> Result<()> {
    // Any single multisig member can freeze (instant)
    let gov = &mut ctx.accounts.governance;
    gov.is_frozen = true;
    gov.freeze_started_at = Clock::get()?.unix_timestamp;
    Ok(())
}

pub fn unfreeze_program(ctx: Context<UnfreezeProgram>) -> Result<()> {
    let gov = &mut ctx.accounts.governance;
    let clock = Clock::get()?;
    // Emergency council only (5-of-7), 7-day delay
    require!(
        clock.unix_timestamp >= gov.freeze_started_at + 604_800,
        ErrorCode::UnfreezeDelayActive
    );
    gov.is_frozen = false;
    Ok(())
}

Even if 4 of 5 normal multisig members are compromised, a single honest member freezes everything. Unfreezing requires a separate, higher-threshold council and a full week.

What Would Have Saved Step Finance

  • Multisig: Attacker needs 3+ keys, not 1
  • Time lock: 48h window to detect and cancel
  • Bytecode verification: Community spots unknown bytecode
  • Upgrade monitor: Alert within seconds
  • Conditional immutability: Any team member freezes instantly

With all five guardrails, $27.3 million stays in the treasury.

Implementation Priority

  1. Today: Transfer upgrade authority to a Squads multisig (30 min)
  2. This week: Set up upgrade monitor with alerts (2 hours)
  3. This sprint: Implement time-locked upgrades (1-2 days)
  4. This quarter: Add bytecode verification to CI/CD
  5. Post-stabilization: Evaluate conditional immutability

The cost of all five guardrails is measured in hours. The cost of not having them is measured in millions.

Key Takeaways

  1. Upgrade authority is root access. Treat it like cold storage for treasury funds.
  2. Multisig is table stakes, not the finish line. Without time locks and monitoring, coordinated attacks still succeed silently.
  3. Defense asymmetry saves you. Make freezing easy and upgrading hard.
  4. Monitor the BPF Loader. It’s the single chokepoint for all Solana program upgrades.
  5. $27.3M was lost to operational security, not code. Security is a stack — code, operations, and governance all need coverage.

This is part of the DeFi Security Best Practices series. The Step Finance incident is a wake-up call for every Solana team running upgradeable programs.