What’s in ShipKit’s $249 Next.js starter

I have started a lot of Next.js apps. Every single one burns the same first two months: auth, payments, database setup, CMS, email templates, UI components. Different project, same plumbing.

ShipKit is my attempt to stop repeating that.

What it is

ShipKit is a Next.js 15 starter kit with production-ready infrastructure already wired up. You clone it, fill in your env vars, and you are writing product code on day two.

$249 one-time. Lifetime updates. No subscription.

What is included

  • Auth: Better Auth with OAuth, magic links, and RBAC. Session handling is done.
  • Payments: LemonSqueezy pre-wired with webhook handling. One-time and subscription support.
  • Database: Postgres + Drizzle ORM. Schema already migrated.
  • CMS: Payload CMS with an admin panel. MDX for blog and docs pages.
  • Email: Resend integration with templates ready to customize.
  • UI: 100+ shadcn/ui components. No more hunting the docs.
  • AI: OpenAI and Anthropic hooks, v0.dev integration, and Cursor rules for AI-assisted development.
  • Deploy: One-click Vercel deploy.

Free tier

There is also Shipkit Bones – the free version. You get Next.js 15, Better Auth, TypeScript setup, and basic components. No card required. Good starting point if you want to see how it is structured before committing.

Why $249

Less than a day of consulting. If you are building something new and would otherwise spend weeks on setup, it pays for itself before you ship.

Full details at shipkit.io/pricing.

Pygame Snake, Pt. 1

Pygame is a Python library for making 2D games. I am just learning pygame myself. I’ve always loved the game Snake, and I find that it makes a good learning project, too.

When using pygame, we will draw each frame of our animation to an offscreen buffer, which is basically an invisible canvas that exists in memory. This lets us draw the scene piece by piece, and then, when it's done, copy the whole buffer to the onscreen canvas.

Pygame will also handle the event loop. We can specify how many frames per second we want our game to run at, and pygame will make sure the event loop runs no faster than that. Each time through the loop, we will draw one frame of animation.

We will also respond to events that pygame gives us. In particular, pygame will send us a KEYDOWN event whenever a key is pressed. Also, if the user closes the pygame window, it will send us a QUIT event, and the game should stop.

Here is a minimal pygame setup, adapted from the example code at https://www.pygame.org/docs/:

import pygame

# initialize pygame
pygame.init()

# create 1000x1000 pixel pygame window
screen = pygame.display.set_mode((1000, 1000))

# clock used to set FPS
clock = pygame.time.Clock()

# game runs as long as this is True
running = True

while running:
    # poll for events
    for event in pygame.event.get():
        # pygame.QUIT = user closed window
        if event.type == pygame.QUIT:
            running = False

    # fill buffer with white
    screen.fill("white")

    # copy buffer to screen
    pygame.display.flip()

    # limits FPS
    clock.tick(20)

pygame.quit()

The only thing I don’t like about this code is that it requires the running variable. I wanted to use an infinite loop with break, but I can’t do that here, since we have to iterate through potentially multiple events in an inner for loop, and Python has no way to break out of nested loops.
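If the flag bothers you, one workaround is to wrap the loop in a function so that a single return exits both loops at once. Here is a minimal sketch of that structure (same behavior as the code above, just reorganized):

def run_game():
    while True:
        # poll for events
        for event in pygame.event.get():
            # pygame.QUIT = user closed window
            if event.type == pygame.QUIT:
                return  # exits both loops at once

        # fill buffer with white
        screen.fill("white")

        # copy buffer to screen
        pygame.display.flip()

        # limits FPS
        clock.tick(20)

run_game()
pygame.quit()

I'll stick with the running flag for the rest of this series, since it matches the official pygame examples.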

OK, so that’s not a bad start, except that it is not exactly the most interesting “animation”. Let’s add a moving square.

To have an animated square, we’ll need a new variable to keep track of its location. If we weren’t using pygame, I might create a simple little class for this (to store x and y coordinates). But pygame already has such a class built-in: Vector2.

So let’s add a new variable called dot and create a Vector2 value for it to hold:

dot = pygame.Vector2(500, 500)

Add that code just above the while loop.

Now, inside the loop, increment dot's x value, and then draw it at its new position. To draw a square, we have to create a Rect object. To do this, we will pass two pairs of values: the (x, y) coordinates of its upper-left corner, and its (width, height) values. The dot variable itself will function as the (x, y) coordinates of the square's upper-left corner.

    dot.x += 10
    square = pygame.Rect(dot, (50, 50))
    screen.fill("black", square)

Place this code right under screen.fill("white"). That command paints the entire buffer white, in anticipation of drawing a new frame. The three new lines of code draw a 50×50 black square at the location indicated by the dot variable.

Full code now:

import pygame

# pygame setup
pygame.init()
screen = pygame.display.set_mode((1000, 1000))
clock = pygame.time.Clock()
running = True

dot = pygame.Vector2(500, 500)

while running:
    # poll for events
    for event in pygame.event.get():
        # pygame.QUIT = user closed window
        if event.type == pygame.QUIT:
            running = False

    # fill buffer with white
    screen.fill("white")

    dot.x += 10
    square = pygame.Rect(dot, (50, 50))
    screen.fill("black", square)

    # copy buffer to screen
    pygame.display.flip()

    # limits FPS
    clock.tick(20)

pygame.quit()

If you have a little experience with Python, try this challenge: Add some code to check if dot.x > 1000, and if so, set it to 0. This should cause the dot to reappear on the left once it goes off the right edge.
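If you get stuck, here is one possible solution – add a bounds check right after the line that increments dot.x:

    dot.x += 10
    if dot.x > 1000:
        dot.x = 0
    square = pygame.Rect(dot, (50, 50))
    screen.fill("black", square)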

Incremental Highlighting for Scala

If an error is detected in a file but no one sees it, is it really highlighted?

This is a tale about why highlighting only what’s visible is both the strangest and the most reasonable thing to do – and how you can benefit from it.

(Enable Settings | Scala | Editor | Highlighting Mode | Incremental to speed up highlighting and reduce resource consumption.)

Compiling highlights

Both IDEs and compilers analyze code, but they do so differently. Compilers, for example:

  • Translate source code into executable code.
  • Process every source file.
  • Treat errors as obstacles to the main goal.

Incremental compilers can avoid recompiling unchanged code, but they, too, eventually compile everything.

IDEs, on the other hand:

  • Analyze code for the purpose of understanding and editing.
  • Highlight errors in the context of the current task.
  • Process only what’s necessary and can tolerate errors.

If one source file doesn’t compile, the compiler won’t process another file that depends on it. An IDE, however, can highlight code that depends on a file with errors, and there might be no errors in the highlighted code itself.

So far, so good. However, even though IDE algorithms do not depend on files, the user interface does. Just as you run scalac Foo.scala, you open Foo.scala in an editor tab. The IDE processes the entire file, even if it’s very large and most of it is not visible or relevant to the task at hand. Because of this, highlighting depends on where you place classes and methods. If you put class Foo and class Bar in separate files, Foo is not highlighted when you edit Bar; but if both classes are in the same file, viewing or editing one class also highlights the other.

For compilers, a UI that processes files in their entirety is natural, because that’s how they work. IDEs, however, do not have this limitation. In contrast to a compiler, IntelliJ IDEA can re-highlight only part of a file without recomputing everything. Nevertheless, there is still an initial highlighting, and not every modification can be localized. The IDE keeps the entire file highlighted, similar to how a compiler keeps the entire project compiled – but it doesn’t have to. What if we could do better?

The file is a lie

Consider the following:

 
Looks like a code snippet, huh? But no, it’s a “file”:

We take this UI pattern for granted. However, if you think about it, such a “file” is no different from a “file” in the Project view: the title shows the name, just like nodes do; the scrollbar provides navigation, like the tree does; the error stripe can show marks, and the tree can show marks as well. We don’t really see the entire file, only a code snippet.

Thus, the question of whether to highlight the entire file makes no sense – we cannot highlight elements that are not visible, just as we cannot highlight elements in files that are not open. (And “opening a directory” in the Project view doesn’t change that.) Note that we don’t say that compilers “highlight” files; only IDEs do. We can only highlight what’s visible. Beyond that, analysis is involved, but not highlighting.

What you see is what you get

Now, how much should an IDE analyze? Even though it’s not possible to highlight code beyond the visible area, it is possible to draw marks on the error stripe, underline nodes in the Project view, or display errors in the Problems tool window. IntelliJ IDEA does all this. Yet, it doesn’t analyze code in files that are not open. Why? Isn’t more analysis better? (In principle, IDEs could process every single file, just as compilers do.) First, there are diminishing returns. Second, it is not without cost.

Source code is not a collection of random symbols. Code has high cohesion and low coupling. The greater the distance between two points in code, the less interconnected they are. That’s why highlighting code in the immediate context is more important, while highlighting distant code is less important. Moreover, the latter can only distract from the task at hand. If you’re editing a method, you don’t want to be distracted by thousands of errors in other files while you’re still in the process of editing.

At the same time, analysis consumes CPU, RAM, and battery. It can make the IDE or even the entire OS less responsive. In this highlighting economy, you want to maximize profit by balancing revenue and costs. But what is the proper scope to hit this Goldilocks zone, and is “the file” the answer?

To begin with, “file” is not a language entity (and compilation units don’t have to be files). It’s actually an implementation detail of how we store source code. In principle, we could store code as an abstract syntax tree (AST) in a database; then there would be no files. Metrics such as cyclomatic complexity are about packages, classes, and methods – not files. It could be that the distance in code is more important than the file boundary.

Another issue with files is that they can be arbitrarily large. In principle, you could put all project classes into a single file. This would not affect the bytecode, but it would make source code more costly to highlight. (Although there is some benefit to detecting errors in the entire file, this doesn’t guarantee the absence of errors in general – for that, you need to compile the project anyway.)

Now consider the visible area: that’s where we can actually highlight code. It is the focal point of attention and the location of the cursor, where feedback is immediate rather than indirect. The benefits of analyzing code in the visible area are the greatest. At the same time, the visible area is naturally bounded by display resolution, human vision, and human comprehension, and the cost of analysis does not depend on file size. It could be that “visible area” is a better choice for the scope than “file.” (We can also extend this logic to folding and skip parts that are folded and not actually visible.)

???

Fortunately, this is not just a theory – you can already try it in practice! Incremental highlighting can now be enabled in Settings | Scala | Editor | Highlighting Mode:

The setting is only for Scala and is available only when you’re using the built-in highlighting, not the compiler. The setting is per-project, so if you want to use this mode in multiple projects, you need to enable it in each one. (The capability has actually been available since 2024.3, but we recommend using the newest version to benefit from more improvements.)

When incremental highlighting is enabled, only code in the visible area is highlighted (excluding folded parts). The algorithm can handle multiple editors, including split editors and the diff viewer.

The mode is called “incremental” rather than “partial” because, even though only part of a file is highlighted, the parts that have already been highlighted remain highlighted. If you scroll through the code back and forth, computations are cached and not duplicated. Furthermore, the mode works well with incremental updates on modifications; editing code in a local scope preserves the already highlighted ranges, even outside the visible area.

To make scrolling smoother, the algorithm pre-highlights 15 lines before and after the visible area. (It’s possible to customize this range by adjusting scala.incremental.highlighting.lookaround in the registry.) However, if you navigate to a definition in a file directly, you may observe on-the-fly highlighting, similar to when you open a file for the first time or navigate to definitions in other files.

The error stripe (marks on the scrollbar) is filtered to include only data within the visible range. Next/Previous Highlighted Error navigates between known – usually visible – errors. Several inspections, such as Unused Declaration and Unused Import, are not active. However, imports are normally folded anyway, and Optimize Imports does work. Many of these restrictions are accidental, intended to keep the implementation simple, and can be improved in future updates. (Select Nightly in Settings | Languages | Scala | Updates to get updates faster.)

When the incremental highlighting mode is enabled, you can double-press the Esc key to analyze the entire file on demand, using all inspections.

Profit

The benefits of using incremental highlighting include:

  1. Better responsiveness
  2. Optimized CPU usage
  3. Efficient memory usage
  4. Cooler system temperatures
  5. Quieter operation
  6. Longer battery life

This applies to both initial highlighting and re-highlighting – both when viewing and editing code. In many cases, highlighting time can be reduced by up to 5–10 times, though the exact amount depends on the file size and code complexity. The benefits of incremental highlighting are especially noticeable for Scala code, which can be rather complex and difficult to analyze.

The benefits also depend on hardware; if you have a powerful desktop machine with water cooling, the effect might be less noticeable, but if you use an ultrabook, the difference is more significant.

Feedback welcome

Both the idea and the implementation are works in progress. Many parts can be refined and improved further. (See SCL-23216 for more technical details.)

We’d love for you to try the feature and share your feedback. Please report any issues through YouTrack. If you have any questions, feel free to ask us on Discord.

Happy developing!

The Scala team at JetBrains

How to Triage a Phishing Alert Faster — Without Rebuilding the Process Every Time

Most phishing alerts do not take long because they are difficult. They take long because the workflow is inconsistent.

You get the alert.

A user reported a suspicious email. Maybe your mail gateway flagged it. Maybe your SIEM created a case. Either way, you now have the same question every SOC analyst has asked a hundred times:

Is this real, or is this noise?

The problem is not that phishing triage is impossible. The problem is that most teams still do it in a fragmented way.

One analyst checks the headers first. Another starts with the sender domain. Someone else jumps straight to the links. Then comes the write-up, the ticket note, the escalation decision, and the inevitable feeling that you may have missed something small but important.

That is where the time goes. Not in any one check by itself. In the lack of a repeatable process.

Over time, I found that the fastest way to triage phishing was not to become “faster” at each individual step. It was to stop rebuilding the workflow from scratch every time.

This is the process I use now to move from a suspicious email to a structured triage note in minutes instead of dragging the same alert through 20 different micro-decisions.

Why phishing triage often takes longer than it should

Most analysts are doing several things at once when a phishing alert lands: checking sender and reply-to details, reviewing SPF, DKIM, and DMARC, inspecting links and domains, deciding whether the message looks like credential harvesting, malware delivery, or simple spam, and documenting findings for a ticket or escalation.

None of those steps are unreasonable. The slowdown comes from doing them in a different order every time, with different depth, and often with different output formats depending on who is on shift.

First problem: time loss. You keep re-parsing the same raw material manually — raw headers, sender path, suspicious domains, authentication results, URLs and context.

Second problem: inconsistency. Two analysts can look at the same phishing email and produce two very different summaries, severities, and next actions. That is not just a people problem. It is a workflow problem. A structured first-pass triage fixes both.

The workflow I use now

Step 1 — Get the full raw email

The first thing I want is not just the visible message body. I want the full raw email: headers, sender path, authentication results, and the message body.

In Gmail, that means opening the message and using Show original. In Outlook or other mail clients, there is usually a similar option to view the full source.

Why this matters: if you only look at the visible email, you miss some of the most useful phishing indicators — Reply-To mismatches, Return-Path differences, SPF / DKIM / DMARC results, sending infrastructure clues, and message routing signals.

The body tells you what the attacker wants you to believe. The raw email tells you how the message actually traveled. You need both.
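To make that concrete, here is a rough Python sketch of the kind of header parsing a first pass does, using the standard library's email module. The field names and the suspicious.eml filename are illustrative assumptions – this is not the browser tool I describe in the next step:

import email
from email.utils import parseaddr

def first_pass(raw_email: str) -> dict:
    # Parse the full raw message, headers included
    msg = email.message_from_string(raw_email)
    from_addr = parseaddr(msg.get("From", ""))[1]
    reply_to = parseaddr(msg.get("Reply-To", ""))[1]
    return {
        "subject": msg.get("Subject", ""),
        "from": from_addr,
        "reply_to": reply_to,
        # A Reply-To that differs from From is a classic phishing signal
        "reply_to_mismatch": bool(reply_to) and reply_to != from_addr,
        "return_path": msg.get("Return-Path", ""),
        # SPF / DKIM / DMARC verdicts recorded by the receiving server
        "authentication_results": msg.get("Authentication-Results", ""),
    }

with open("suspicious.eml") as f:
    print(first_pass(f.read()))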

Step 2 — Run a structured first-pass analysis

Instead of manually pulling the email apart every time, I paste the raw message into a phishing triage workflow that handles the first-pass parsing for me.

I use SOC.Workflows, which is a browser-based tool I built for exactly this kind of structured analyst workflow. The important part is not the brand. The important part is the sequence.

Paste the raw email into a structured analyzer, and let it do the first-pass breakdown:

  • sender and reply-to mismatch
  • SPF / DKIM / DMARC results
  • suspicious domains or lookalikes
  • shortened or risky URLs
  • urgency language and social engineering cues
  • severity and confidence
  • recommended next steps

That instantly turns a wall of raw email data into something you can actually reason about. And because the pasted email content is processed in the browser and not sent to a server, you can do that first-pass triage without shipping the raw message off somewhere else.

Step 3 — Review the signals, not just the branding

You stop asking: “Does this look polished?” and start asking: “Do the technical and contextual signals line up?”

A polished email is not trustworthy because it is polished. A passing SPF result is not trustworthy because it passed SPF. A brand logo is not proof of legitimacy. Phishing today often looks clean enough to pass a visual glance. What matters is whether the sender path, destination, and context actually make sense together.

Step 4 — Use AI only after the structure exists

Many people paste the raw email directly into ChatGPT or Claude and ask: “Is this phishing?” That can work sometimes, but it is inconsistent because the input is inconsistent. Raw data is noisy. Structured input is much more useful.

The better approach: do the first-pass parsing first, organize the evidence, then send the structured prompt into AI for deeper reasoning. Once the key signals are already extracted, AI becomes much more useful for validating the assessment, drafting a user advisory, suggesting containment steps, and writing a clean incident note.

AI works much better when it receives labeled evidence, not a wall of raw text.
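For example, once the first pass has produced labeled fields, assembling the prompt is mechanical. A minimal Python sketch (the evidence values are made up for illustration):

signals = {
    "from": "billing@secure-payments.example",
    "reply_to": "support@freemail.example",
    "reply_to_mismatch": True,
    "authentication_results": "spf=fail; dkim=none; dmarc=fail",
    "urgency_language": "account will be suspended in 24 hours",
}

# Turn the labeled evidence into a compact, structured prompt for the AI step
prompt = (
    "You are assisting with phishing triage. Labeled evidence follows:\n"
    + "\n".join(f"- {key}: {value}" for key, value in signals.items())
    + "\n\nAssess phishing likelihood, severity, and recommended next steps."
)
print(prompt)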

Step 5 — Copy the incident note and move on

Once the findings are structured, copy the incident note into the SIEM ticket, ServiceNow, Jira, Slack, or whatever case workflow you use. A structured note fixes the write-up problem and makes handoff easier — every investigation looks more consistent across the team.

Why this matters beyond speed

Consistency. When the same type of alert gets triaged the same way every time, notes are cleaner, severity is easier to defend, escalations are more predictable, and handoffs are smoother.

Junior analyst support. A structured workflow helps less experienced analysts know what to check, in what order, and what actually matters. That reduces hesitation and helps them escalate with more confidence.

Better use of AI. AI is most useful after the evidence has already been organized — second-pass reasoning, clearer communication, faster documentation. Not as a substitute for the first-pass thinking.

What I would recommend to any SOC team

  1. Standardize the first pass — do not let every analyst invent the workflow from scratch.
  2. Work from the full raw email — do not rely only on the visible message.
  3. Structure the evidence before using AI — do not ask AI to do the organizing work if you can parse and label the signals first.

If you want to try this workflow

The phishing analyzer is at socworkflows.com/phishing — free, browser-based, no account needed.

If phishing is only one part of your queue, there are also analyzers for alert triage, VPC flow logs, and credential dumping — all built around the same idea: client-side triage first, AI reasoning second.

Final thought

Most phishing alerts do not become slow because the analysis is too complex. They become slow because the process is inconsistent. Fix the workflow, fix the speed. Structure the first pass properly, and you make everything after that easier — investigation, escalation, documentation, and team consistency.

That is where the real time savings come from.

Busy Plugin Developers Newsletter – Q1 2026

Your quarterly dose of plugin dev news, tools, and tips from JetBrains.

🧩 Marketplace Updates

Updated Approval Guidelines: New Technical Requirement

We’ve added a new clause to the Plugin Approval Criteria, under Section 2.2:

c. The Plugin must not modify, hide, intercept, or otherwise interfere with the functionality of any JetBrains Product in a way that disrupts, degrades, or circumvents its intended behavior, including any mechanisms related to licensing, subscriptions, trials, or product upgrade flows.

This requirement formalizes what has always been an expectation: plugins on JetBrains Marketplace must not tamper with core product behavior. Any plugin found to interfere with licensing, subscription validation, trial flows, or upgrade mechanisms (regardless of intent) will not meet our approval criteria.

If your plugin interacts with IDE internals in these areas, please review your implementation and make sure it complies before your next submission.
See the docs →

Build Plugins Page Got a New Look

The page for those just entering the world of plugin development has been redesigned. It now provides clearer steps covering the entire process—from development to publishing—along with key resources and essential information to simplify plugin creation and management.
Explore the page →

🔧 Plugin Development Tooling Updates

IntelliJ Platform Plugin Template 2.5.0

Repository that simplifies the initial stages of plugin development for IntelliJ-based IDEs. 

  • Updated org.jetbrains.intellij.platform to version 2.14.0.
  • Simplified project configuration by migrating IntelliJ Platform repository setup to settings.gradle.kts and inlining key properties and dependencies.
  • Cleaned up build scripts by removing redundant configurations and deprecated dependencies.
  • Streamlined the template by removing Qodana, Kover, and related configuration, dependencies, and workflow steps.

View Changelog →

IntelliJ Plugin Verifier 1.402

Tool that checks binary compatibility between IntelliJ-based IDE builds and plugins. 

  • Improved stability by optimizing graph cycle calculations and fixing macOS module naming issues.
  • Enhanced compatibility with paginated plugin ID retrieval from the Marketplace API.
  • Updated Kotlin and key dependencies.

View Changelog →

IntelliJ Platform Gradle Plugin 2.14.0

Plugin that helps configure your environment for building, testing, verifying, and publishing plugins for IntelliJ-based IDEs. 

  • Improved defaults for plugin verification, signing, and publishing configuration.
  • Added new helpers for selecting target IDEs and enhanced Gradle task configuration.
  • Simplified project setup with better defaults for Java toolchains and module handling.
  • Updated the minimum supported IntelliJ Platform version and fixed compatibility issues.

View Changelog →

💡 Tip of the Quarter

Enable Internal Mode to Access Powerful Development Tools

Enable Internal Mode to access UI Inspector (see component creation code), PSI Viewer, Registry settings, and more. Use “Skip Window Deactivation Events” when debugging to prevent ProcessCanceledException during breakpoints.

📚 Resources & Learning

 📖 Blog

Wayland by Default in 2026.1 EAP
IntelliJ-based IDEs will run natively on Wayland by default in supported Linux environments. See what’s changed, what to expect, and how this transition improves stability and performance.
Read →

Editor Improvements: Smooth Caret Animation and New Selection Behavior
More precise selections, smoother caret movement, and a refreshed visual style bring a more comfortable and intuitive coding experience. See what’s changed in the editor.
Read →

The Experience Gap: How Developers’ Priorities Shift as They Grow
A closer look at how plugin developers grow with the platform—and how their tools, habits, and expectations change along the way.
Read →

UI Freezes and the Dangers of Non-Cancellable Read Actions
Not all UI freezes come from the EDT. See how background read actions can cause issues and how to fix them.
Read →

🛠 IntelliJ Platform Plugin SDK

Split Mode (Remote Development)
Learn how Split Mode changes where plugin code runs, how frontend and backend communicate, and what it takes to build plugins that work smoothly in remote development environments.
Read →

Top-Level Notifications
Notification balloons help surface relevant information without interrupting the developer’s flow. Explore how to configure notification groups, add actions, and choose the right notification type.
Read →

Short Video: Notification Balloons in IntelliJ SDK

⭐ Community Spotlight

Livestream Recording: UI Freezes in JetBrains IDE Plugins and How to Avoid Them

In a recent livestream, Product Manager for the IntelliJ Platform Yuriy Artamonov, together with Developer Advocate Patrick Scheibe, explored one of the most common and frustrating issues in plugin development—UI freezes. The session covered why freezes happen (even outside the event dispatch thread), how read actions can block the UI, and what patterns to avoid when working with background tasks and concurrency.

Plugin Model v2 Explained: A New Way for JetBrains IDEs

In this short video, Róbert Novotný walks through Plugin Model v2 for the IntelliJ Platform, covering its modular architecture, improved reloading, and clearer dependency management.

💌 Until next time — happy coding!
JetBrains Marketplace team

Kotlin Professional Certificate by JetBrains – Now on LinkedIn Learning

JetBrains has partnered with LinkedIn Learning to offer the Kotlin Professional Certificate. This is a structured learning path that covers the full scope of modern software development – from Kotlin essentials all the way to building full-stack, multiplatform applications for mobile, desktop, web, and backend environments.

Start Learning

Who it’s for

This certification is designed for developers with basic programming knowledge who want to pick up Kotlin and explore multiplatform development. Whether you’re coming from Java, Python, C, or another language, this program will give you insight into what Kotlin can do across the full development landscape. If you are a mobile developer who wants to stop writing things twice, a backend developer curious about Kotlin’s server-side capabilities, or a generalist who wants to ship on multiple platforms without fracturing your codebase, this is for you.

What the learning path covers

The certification includes four courses structured to guide you along an intuitive path:

Kotlin Essential Training: Functions, Collections, and I/O starts with the fundamentals – how Kotlin handles functions, how its collection APIs work, and how to interact with files and I/O, as well as core Kotlin syntax. If you are coming from Java, a lot of this will feel familiar but cleaner. If you are coming from elsewhere, this is where Kotlin’s expressiveness starts to click.

Kotlin Essential Training: Object-Oriented and Async Code goes deeper into OOP principles and asynchronous programming in Kotlin. The course introduces distinctive Kotlin features such as sealed classes, data classes, and extension functions, while showing how coroutines make async programming more readable. This course builds the foundation you need before getting started with multiplatform.

Kotlin Multiplatform Development teaches you how to write shared business logic once and deploy it across multiple platforms – mobile (Android and iOS), web, desktop, and backend. You’ll learn about the architecture that makes this possible, as well as how to structure a KMP project, what can and can’t be shared, and how to make the boundaries between platforms work for you rather than against you.

Exploring Ktor With Kotlin and Compose Multiplatform brings it all together. Ktor is JetBrains’ own framework for building asynchronous servers and clients in Kotlin; Compose Multiplatform extends Jetpack Compose to desktop and web. Together, they let you build full-stack applications with a genuinely unified approach. This course is the practical capstone – you leave with experience actually building something, not just learning concepts.

Access

The Kotlin Professional Certificate is available on LinkedIn Learning through a LinkedIn Premium subscription, which includes a one-month free trial for eligible users. Many organizations and universities also provide LinkedIn Learning access to their employees and students, and some public libraries offer free access with a library card as well, so it’s worth checking with your employer or institution.

Start Learning

The certificate

In total, the certification takes about 11 hours spread across the four courses. You’ll work in IntelliJ IDEA, the industry’s leading IDE, gaining practical knowledge that’s essential for your career. By the end, you’ll be able to build complete multiplatform applications from a shared codebase. 

Complete all four courses and pass the final exam to earn your Kotlin Professional Certificate by JetBrains. You’ll be able to download it, share it, and add it directly to your LinkedIn profile to showcase your Kotlin and multiplatform development skills to recruiters and hiring managers.

Let us know how you like the courses, and be sure to share your certificate and tag us on LinkedIn.

We’re excited to see what you build with Kotlin!

Session Timeouts: The Overlooked Accessibility Barrier In Authentication Design

For web professionals, session management is a balancing act between user experience, cybersecurity, and resource usage. For people with disabilities, it is more than that — it is a barrier to buying digital tickets, scrolling on social media, or applying for a loan online. Session timeout accessibility can be the difference between a bad day and a good day for those with disabilities.

For many, getting halfway through an important form only to be unceremoniously kicked back to the login screen is a common experience. Such incidents can lead to exasperation and even abandonment of the website entirely. With some backend work, web professionals can ensure no one has to experience this frustration.

Why Session Timeouts Disproportionately Affect Users With Disabilities

A considerable portion of the global population has cognitive, motor, or vision impairments – around 1.3 billion people worldwide live with significant disabilities. These impairments affect how easily they can interact with technology, and all of these users can be disproportionately affected by session timeouts, making session timeout accessibility a critical issue.

Session timeouts are inaccessible for a large percentage of the population. Users may look inactive when they are not, and strict timeouts put them under undue pressure.

Motor Impairments and Slower Input Speeds

Imagine, for instance, someone with cerebral palsy trying to purchase tickets online for an upcoming concert. Due to coordination difficulties and muscle stiffness, they may enter their information more slowly than a non-disabled person would. They select the date, choose their seats, and fill out personal information. Before they can enter their credit card details, a timeout pop-up appears. They have been logged out due to "inactivity" and must restart the entire process.

This situation is not entirely hypothetical. Matthew Kayne is a disability rights advocate, broadcaster, and contributor to The European magazine. He describes the effort required to navigate websites as someone with cerebral palsy. He explains how the user interface is often poorly designed for adaptive devices, and he worries his equipment won’t respond correctly. After carefully navigating each page, he is suddenly logged out. In a moment, one timed form can erase hours of work, and it’s not just a matter of inconvenience. A single failed attempt can delay support or cause him to miss appointments.

Motor impairments can slow input speed, making it appear the user is not at their computer. As such, people who experience stiffness, hand tremors, coordination challenges, involuntary movements, or muscle weakness are disproportionately affected by session timeouts. According to the DWP Accessibility Manual, it can take multiple attempts for adaptive technology to register input, slowing users down considerably. Even if they receive a warning, they may not be able to act fast enough to prove they are still active.

Cognitive Impairments and Processing Time

Session timeouts can also create accessibility barriers for people with various types of cognitive differences. Strict timeouts assume everyone processes information at the same speed, creating undue pressure. Users may appear inactive when they are actually reading, thinking, or processing.

Cognitive differences encompass a wide range of experiences, including neurodivergences like autism and ADHD, developmental disabilities like Down syndrome, and learning disabilities like dyslexia. Many people are born with cognitive differences. In fact, an estimated 20% of people are neurodivergent, making up a large portion of any website’s audience. Others acquire cognitive disabilities later in life through traumatic brain injury or conditions like dementia.

People with cognitive disabilities often need more time to complete online tasks — not because of any deficit, but because they process information differently. Design choices that work well for neurotypical users can create unnecessary obstacles for people with ADHD, dyslexia, autism, or memory-related conditions.

Invisible session timeouts are particularly problematic for people who experience memory loss, language processing differences, or time blindness. For example, neurodivergent technology leader Kate Carruthers says ADHD has affected her perception of time. She has time blindness and can’t reliably track how much time has passed, making estimates unhelpful.

When websites depend on users estimating remaining time before a session expires, they quietly exclude people — not just those with formal ADHD diagnoses, but anyone who experiences time differently or processes information at a different pace.

Vision Impairments and Screen Reader Navigation Overhead

Since blind or low-vision users cannot visually scan a page to find what they need, they must listen to links, headings, and form fields, which is inherently more time-consuming. More than 43 million people worldwide are affected by blindness, while 295 million have moderate to severe vision impairment, which makes this a significant accessibility concern for any global-facing website.

As a result, these users’ sessions may expire even if they are active. Live timers and 30-second warnings do little to help, as they are not built with screen readers in mind.

Bogdan Cerovac, a web developer passionate about digital accessibility, experienced this firsthand. The countdown timer informed him how long he had left before being logged out due to inactivity. By all accounts, it worked fine. However, he describes the screen reader experience as horrible, as it notified him of the remaining time every single second. He couldn’t navigate the page because he was spammed by constant status messages.

Common Timeout Patterns That Fail Accessibility Requirements

According to the National Institute of Standards and Technology, session management is preferable to continually preserving credentials, which would incentivize users to create authentication workarounds that could threaten security. However, several common timeout patterns fail to meet modern standards for session timeout accessibility.

Silent Timeouts and Insufficient Warnings

Many websites either provide no warning before logging users out, or they display a brief, seconds-long pop-up that appears too late to be actionable. For users who navigate via screen reader, these warnings may not be announced in time. For those with motor impairments, a 30-second countdown may not provide enough time to respond.

Let's consider the Consular Electronic Application Center's DS-260 page, which is used to apply for a U.S. immigrant visa. If the application sits idle for around 20 minutes, it logs the user out without warning, and the FAQ page only provides an approximate time estimate. Work is saved only when a page is completed, so applicants may lose significant progress.

Nonextendable Sessions

An abrupt “session expired” message is frustrating even for individuals without disabilities. If there is no option to continue, users are forced to log back in and restart their work, wasting time and energy.

Form Data Loss on Expiration

Unless the website automatically saves progress, visitors will lose everything when the session expires. For someone with disabilities, this does not simply waste time. It can make their day immeasurably harder. Imagine spending an hour on a service request, job application, or purchase order only for all progress to be completely erased with little to no warning.

Design Patterns That Balance Security and Accessibility

Inconsistent timeout periods and a lack of warnings lead to the sudden, unexpected loss of all unsaved work. For long, complex forms, like the DS-260, a poor user experience is extremely frustrating. In comparison, the United Kingdom’s application for pension credit is highly accessible. It warns users at least two minutes in advance and allows them to extend the session. It meets level AA of the WCAG 2.2 success criteria, indicating its accessibility.

People with disabilities are disproportionately affected by the unintended consequences of poor session management. Thankfully, inaccessible session timeouts are not inevitable. With a few small changes, web professionals can significantly improve their website's accessibility.

Advance Warning Systems and Extend Functionality

Websites should clearly state the time limit’s existence and duration before the session starts. For instance, if someone is filling out a bank form, the first page should exist solely to inform them that it has a 60-minute time limit. A live counter that updates regularly can help them track how much time remains. Also, users should be told whether they can adjust the session timeout length.

Activity-Based vs. Absolute Timeouts

An activity-based timeout logs users out due to inactivity, while an absolute timeout logs them out regardless of activity. For an office, a 24-hour absolute timer might make sense, since workers only need to log in when they get to work. As long as users know when their session will expire, an absolute timeout is more accessible than an activity-based one, because it does not penalize slower input.
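To make the difference concrete, here is a minimal backend sketch in Python; the time limits and names are illustrative assumptions, not a prescribed implementation:

from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(minutes=20)      # activity-based limit: resets on every request
ABSOLUTE_LIMIT = timedelta(hours=24)    # absolute limit: fixed from the moment of login

def session_expired(login_at: datetime, last_activity_at: datetime, now: datetime) -> bool:
    # Activity-based expiry penalizes slow input: only new activity restarts the clock.
    idle_expired = now - last_activity_at > IDLE_LIMIT
    # Absolute expiry is predictable: the deadline is known as soon as the user logs in.
    absolute_expired = now - login_at > ABSOLUTE_LIMIT
    return idle_expired or absolute_expired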

Auto-Save and Progress Preservation

Cookies, localStorage, and sessionStorage are lightweight, client-side storage mechanisms. sessionStorage stores data only for the duration of a single browser session, while cookies and localStorage can persist beyond it. Web developers can use them to automatically save users' progress at frequent intervals, ensuring data is restored upon reauthentication.

This way, even if someone’s session expires by accident, they are not penalized. Once they log back in, they can finish filling out their credit card details or pick up where they left off with an online form.

Testing and WCAG Compliance Considerations

The Web Content Accessibility Guidelines (WCAG) are a collection of internationally accepted internet accessibility standards published by the W3C. They act as the arbiter of session timeout accessibility. Web developers should pay special attention to Guideline 2.2, Enough Time, which outlines best practices for giving users adequate time.

A timing-adjustable mechanism should let users extend the time limit before the session expires, or allow the limit to be turned off completely. For the former option, a dialog box should appear asking users if they need more time, allowing them to continue with one click. The W3C notes that exceptions exist.

For example, when a website conducts a live ticket sale, users can only hold tickets in their carts for 10 minutes to give others a chance to purchase limited inventory. Alternatively, session timeouts may be necessary on shared computers. If librarians allowed everyone to stay logged in instead of automatically signing them out overnight, they would risk security issues.

Some processes should not have time limits at all. When browsing social media, reading a news article, or searching for items on an e-commerce site, there is no reason a session should expire within an arbitrary time frame. Meanwhile, in a timed exam, it may be necessary. However, in this case, administrators can extend time limits for students with disabilities.

When web developers make session management accessible, they are not catering to a small group. Pew Research Center data shows that 62% of adults with disabilities own a computer and 72% have high-speed home internet. These figures do not differ statistically from the percentages of non-disabled adults who say the same.

Overcoming the Session Timeout Accessibility Barrier

The WCAG provides additional resources that web developers can review to understand session management accessibility better:

  • WCAG SC 2.2.1 Timing Adjustable
  • WCAG SC 2.2.5 Re-authenticating
  • WCAG SC 2.2.6 Timeouts

In addition to following these guidelines, there is a wealth of information from leading educational institutions, authorities on open web technologies, and government agencies. They provide a great starting place for those with intermediate web development knowledge.

Web professionals should consider the following resources to learn more about tools and techniques they can use to make session management more accessible:

  • Harvard University’s Session Extension Technique
  • DWP Accessibility Manual: How to test session timeouts
  • Window: sessionStorage property

Session timeout accessibility is not only an industry best practice but an ethical web development standard.

Those who prioritize it will appeal to a wider audience, improve usability, and attract more website visitors and longer sessions.

The main takeaway is that a website with inaccessible session timeouts sends a clear message that it doesn’t value the user’s time or effort, a problem that creates significant barriers for people with disabilities. However, this is a solvable issue. With a few simple changes, such as providing session extension warnings and auto-saving progress, web developers can build a more considerate, accessible, and respectful internet for everyone.

Further Reading On SmashingMag

  • “What Does It Really Mean For A Site To Be Keyboard Navigable”, Eleanor Hecks
  • “Designing For Neurodiversity”, Vitaly Friedman
  • “What I Wish Someone Told Me When I Was Getting Into ARIA”, Eric Bailey
  • “A Designer’s Accessibility Advocacy Toolkit”, Yichan Wang

The 6 Git Hooks I Copy Into Every New Repo

  • Every new repo I start gets the same six git hooks copied in before the first commit lands

  • Pre-commit lint and type-check runs catch noisy mistakes before they hit CI, saving 30 to 60 seconds per push

  • A commit-msg regex enforces conventional commits so my changelog auto-generates without me thinking about it

  • Post-merge triggers a script that reinstalls dependencies when package.json changes, ending the “why is this broken” Monday ritual

  • The whole setup lives in a single hooks/ folder that I clone or curl into new projects in under 30 seconds

I have 15 active repos under RAXXO Studios and one more for my day job. Every single one has the same git hooks folder. Not because I am religious about automation, but because I got tired of the same six mistakes repeating across every project: broken lockfiles, malformed commit messages, missing type checks, stale dependencies, accidentally committed env files, and the classic push-to-main-on-Friday reflex.

Git hooks solve all of them for free. They run locally, cost nothing, and the setup takes 30 seconds per new project once you have the folder copied somewhere you can reach.

Here are the six hooks I copy into every new repo, what they actually do, and why I bothered to write each one.

Why Git Hooks Beat CI for These Checks

I run CI on every project. GitHub Actions, Vercel preview deployments, the usual stack. CI is not the right place for the checks I am about to describe.

The reason is feedback speed. CI takes 90 seconds to 3 minutes to report a failure. If I push a commit with a type error, I find out three minutes later, context-switch back, fix it, and push again. That is two 3-minute cycles of attention spent on a mistake I could have caught in 2 seconds locally.

Pre-commit hooks run before the commit exists. You get told about the mistake before it becomes a git object. Pre-push hooks run before the push leaves your machine. The feedback loop is tight enough that you actually fix things instead of ignoring the CI email.

The other reason is trust. Hooks that run on every developer’s machine mean no one pushes a commit they have not personally verified. On a solo project that is just you, and you are the only person you need to trust. But solo projects grow into team projects, and the hooks you put in on day one set the culture for the team later.

Hook 1: Pre-Commit Lint and Type Check

This is the most important one. Every commit runs the linter and the type checker on the staged files only.

I use lint-staged for this because it handles the “only check what I staged” part for me. The hook is three lines.


#!/bin/sh
# .git/hooks/pre-commit
npx lint-staged

The lint-staged config lives in package.json.


{
  "lint-staged": {
    "*.{ts,tsx}": ["eslint --fix", "tsc --noEmit"],
    "*.{css,scss}": ["stylelint --fix"],
    "*.md": ["prettier --write"]
  }
}

The tsc --noEmit run is the part most setups skip. Eslint catches syntax and style issues, but only tsc catches "you passed a string where a number was expected" across three files. Running it on every commit adds 2 to 4 seconds and catches the exact class of bug that takes 15 minutes to debug from a CI log.

One gotcha: tsc --noEmit without project flags runs against the whole project, not just staged files. For small projects that is fine. For anything over 200 files, use tsc --project tsconfig.json --incremental and cache the build output. The incremental flag cuts subsequent runs from 8 seconds to under 1.

Hook 2: Commit-Msg Conventional Commits Regex

Every commit message in my repos follows conventional commits. feat:, fix:, chore:, docs:, refactor:, test:, perf:. Not because a style guide told me to, but because my release script parses commit messages to auto-generate the changelog.

The hook is a regex check.


#!/bin/sh
# .git/hooks/commit-msg
commit_msg=$(cat "$1")
pattern="^(feat|fix|chore|docs|refactor|test|perf|style|ci|build|revert)(\([a-z0-9-]+\))?: .{3,}$"

if ! echo "$commit_msg" | grep -qE "$pattern"; then
  echo "Commit message must follow conventional commits format:"
  echo "  feat: add login button"
  echo "  fix(auth): handle expired tokens"
  echo ""
  echo "Your message: $commit_msg"
  exit 1
fi

This catches “updated stuff” and “wip” commits before they happen. The downstream benefit is that my release-please or changesets config can read the log and build a proper changelog automatically. I have not manually written a CHANGELOG entry in 18 months.

Yes, you can bypass this with --no-verify. I do it maybe once a month for a genuine emergency commit. The hook does not need to be bulletproof. It needs to be annoying enough that I write better commit messages by default.

Hook 3: Post-Merge Dependency Sync

This is the hook that saves me the most frustration per month. When I pull changes that modify package.json or package-lock.json, the post-merge hook reinstalls dependencies automatically.


#!/bin/sh
# .git/hooks/post-merge
changed_files="$(git diff-tree -r --name-only --no-commit-id ORIG_HEAD HEAD)"

check_run() {
  echo "$changed_files" | grep --quiet "$1" && eval "$2"
}

check_run package.json "npm install"
check_run package-lock.json "npm install"
check_run bun.lockb "bun install"
check_run pnpm-lock.yaml "pnpm install"

The old Monday ritual was: pull latest, run the dev server, watch it crash because someone added a dependency last week. Then run npm install, wait 40 seconds, try again. This hook makes that automatic. The dev server starts clean every time.

It also works for checkout. Add the same logic to post-checkout and branch switches never leave you with the wrong dependency tree.

Hook 4: Pre-Commit Secret Scan

This is the one that has saved me from public embarrassment at least twice. A grep-based scan of staged files for common secret patterns.


#!/bin/bash
# part of .git/hooks/pre-commit

patterns=(
  "AKIA[0-9A-Z]{16}"
  "sk-[a-zA-Z0-9]{32,}"
  "ghp_[a-zA-Z0-9]{36}"
  "xox[baprs]-[0-9a-zA-Z-]+"
  "-----BEGIN (RSA |DSA |EC |OPENSSH )?PRIVATE KEY"
)

for pattern in "${patterns[@]}"; do
  if git diff --cached | grep -qE "$pattern"; then
    echo "BLOCKED: Possible secret detected matching pattern: $pattern"
    echo "If this is a false positive, use git commit --no-verify"
    exit 1
  fi
done

The patterns cover AWS access keys, OpenAI keys, GitHub personal access tokens, Slack tokens, and SSH private keys. That is not exhaustive. For stricter scanning, use gitleaks or trufflehog as the pre-commit command. For a solo project where I mostly just need to catch “oh no I pasted my API key into a config file” the grep version is enough.

The reason this matters. Even if you delete the secret in the next commit, the old commit is still in git history. Rotating the key is the only real fix, and rotating an API key at 11pm on a Sunday because a crawler scraped your public repo is a bad time. Better to block the commit.

Hook 5: Pre-Push Branch Protection

This is a simple check that prevents me from pushing directly to main.


#!/bin/sh
# .git/hooks/pre-push
protected_branch="main"
current_branch=$(git symbolic-ref --short HEAD)

if [ "$current_branch" = "$protected_branch" ]; then
  echo "Direct push to $protected_branch is blocked."
  echo "Create a feature branch and open a PR instead."
  echo "Use git push --no-verify to override in emergencies."
  exit 1
fi

On solo projects this sounds overkill. It is not. The times I broke production were always direct pushes to main on a Friday evening when I was tired. A one-line speed bump forces me to at least create a branch, which forces me to at least pause and think about whether this needs a PR review from future-me.

Bypass with --no-verify for the genuine emergency fixes. The friction is the point.

Hook 6: Post-Commit CLAUDE.md Reminder

This is the most RAXXO-specific hook, but the pattern generalizes. If a commit touches specific files that come with a “please also update X” obligation, the post-commit hook reminds me.


#!/bin/sh
# .git/hooks/post-commit
last_commit_files=$(git show --pretty="" --name-only HEAD)

if echo "$last_commit_files" | grep -q "package.json"; then
  echo ""
  echo "REMINDER: package.json changed. Update CLAUDE.md if you added a new dependency you want Claude to know about."
fi

if echo "$last_commit_files" | grep -q "hooks/"; then
  echo ""
  echo "REMINDER: hooks folder changed. Update the hook count in CLAUDE.md."
fi

if echo "$last_commit_files" | grep -qE "\.env"; then
  echo ""
  echo "WARNING: You committed a file matching .env pattern. Double-check this was intentional."
fi

In a team setting you would use this for “you changed the schema, update the API docs” or “you added a migration, update the runbook”. The point is that the reminder runs in your terminal, right after the commit, when the context is still fresh.

Bottom Line

Six hooks, one folder, copied into every new repo. The total setup is maybe 80 lines of shell, committed to a dotfiles repo that I clone into new projects with one command.


curl -sL https://raw.githubusercontent.com/yourname/dotfiles/main/install-git-hooks.sh | bash

That is it. No framework, no Husky config, no pre-commit.com yaml file. Just shell scripts in a folder that git already knows to run.

The compounding benefit is that every project, from day one, has the same guardrails. The commit messages are clean, the type checks run, the secrets stay out, and dependencies stay in sync. I have not personally remembered to run npm install after a pull in two years. That is time I get back for work that actually matters.

If you already use Husky or lint-staged, you are 80 percent of the way there. Move the last 20 percent into a plain hooks folder and you will stop fighting your tooling.

Can Claude Code migrate VanillaJS/HTML/CSS to Preact/Tailwind?

In my last post, I introduced LinkedIn Secret Weapon, the Chrome Extension I built with Claude Code to supercharge my LinkedIn workflow.

As I mentioned, the app was built almost entirely with Claude Code – I had no background or knowledge about building browser extensions. I just wanted the tool, and AI built it, and it worked! But now I want to work on expanding it, adding a backend to store the actions a user takes, getting my hands a little more dirty with the code.

So, inspired by this article on building a Chrome extension with modern frontend tooling, I decided to try porting it to use React, Typescript, Tailwind, and Vite. A few years ago this would have been a relatively daunting task for a side project, but this is 2026, so AI can probably do it for me, right? Mostly right!

First of all, I was concerned that React would add a lot of overhead for a Chrome Extension, which should be pretty light and quick. So I decided to use Preact, which, as its homepage states, is a “fast 3kB alternative to React with the same modern API.”

So I pointed Claude at the article:

Using this webpage as a guide, migrate this app to the stack in the guide, but use Preact instead of React. Make incremental, well-organized commits.

It made these 3 commits:

  • Phase 1: Setup Vite, CRXJS, TypeScript, Preact, and Tailwind
  • Phase 2: Migrate content, background, and options to TypeScript
  • Phase 3: Fix Vite and Tailwind configuration

Did it run? No! It was trying to import TS files from manifest.json, which simply didn’t work. I asked Claude for help. This was a problem it couldn’t quite fix, but, as usual, it didn’t realize that. It kept saying things like “Ah, I see the problem!” and “Okay, now it’s fixed”, when it was definitely not fixed. It suggested things like developing against npm run build -- --watch, which is pretty ridiculous. So I had to figure this one out for myself!

I looked at the manifest file and the Vite config, and it looked like Claude had added a whole bunch of stuff that didn’t need to be there. I headed over to the docs for CRXJS and compared their file tree to mine. They had a much leaner Vite config, plus a manifest.config.ts file. I basically just pasted in CRXJS’s Vite config, and was able to load the unpacked extension into Chrome without error.
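
For reference, here is roughly the shape I landed on. This is a sketch based on the CRXJS docs rather than a copy of my repo, so the manifest fields below are illustrative:

// vite.config.ts – the lean config, per the CRXJS docs
import { defineConfig } from "vite";
import preact from "@preact/preset-vite";
import { crx } from "@crxjs/vite-plugin";
import manifest from "./manifest.config";

export default defineConfig({
  plugins: [preact(), crx({ manifest })],
});

// manifest.config.ts – defineManifest type-checks the manifest, and CRXJS
// rewrites the TypeScript entry points into built JS at build time
import { defineManifest } from "@crxjs/vite-plugin";

export default defineManifest({
  manifest_version: 3,
  name: "LinkedIn Secret Weapon",
  version: "1.0.0",
  action: { default_popup: "index.html" },
  content_scripts: [
    { matches: ["https://www.linkedin.com/*"], js: ["src/content.ts"] },
  ],
});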

The code had a bunch of type errors, but the extension worked! Well, to be fair, the Popup worked – I wasn’t trying the Options page yet. But, if I remember correctly, the functionality also worked (copying, clicking, etc.).

However, the styles were not quite right:

Broken styling after Claude's migration

Here’s how the popup looked the first time I got the Preact version running.

Also, I noticed that sometimes the popup would take a long time to load – a number of seconds, at times. I’m pretty sure this mainly happened when first loading the extension, or perhaps after an update. That is definitely bad UX and I’ll need to address it.

So next, I had to fix the styling. Could AI help? I’ll let you know soon!

WCAG: Making the internet more accessible

We at Centro Labs recently finished making LocalMate WCAG 2.2 Level AA compliant.
It’s not something that shows up on a demo reel or has any flashy consequences, but it’s one of the more meaningful things we have shipped. Here’s how we did it and what we found out while doing it.

What WCAG actually is

The Web Content Accessibility Guidelines (or just WCAG) are international standards developed by the W3C Web Accessibility Initiative (WAI) to ensure digital content is accessible to everyone, including people with disabilities.
They apply to both web applications and mobile applications.

POUR Principles

It’s centered around the four core POUR principles:

Perceivable

Information must be presentable to users in ways they can sense. This can mean: alt text on images, ARIA labels, captions, etc.

Operable

UI components and navigation must be operable. This means: being able to tab through the content of the website, having enough time to read it, etc.

Understandable

Information and operation must be understandable, meaning: readable text, predictable behavior, etc.

Robust

Content must be robust enough to be interpreted by a wide variety of user agents, including assistive technologies.

Conformance Levels

The standard is split up into three levels of compliance:

  • A: Minimum level of accessibility
  • AA: Medium level of accessibility. This is often required for public-sector applications, and sadly many applications still fail to meet it.
  • AAA: Maximum level of accessibility

Why it matters beyond compliance

Usually when people pitch WCAG compliance they talk about government guidelines, legal requirements, etc. All of that is true, but its importance runs deeper: roughly 15%–20% of the population has some disability that affects how they navigate and experience the web.
A much larger group has situational impairments: navigating a website on a dim screen, using a service with a sprained wrist, using a service in a second or third language.
Accessibility is not only for people with permanent impairments; it also covers the temporary constraints everyone runs into when using digital services.

For LocalMate specifically, the reasoning is clear: it’s a service that makes information and services accessible to anyone. Not complying with the WCAG AA guidelines would contradict our mission.

What most websites get wrong

Designers and developers usually work on their applications in a controlled environment: on a desktop with a bright screen, familiar with the language, and with a deep understanding of how the UX was designed – after all, they were there when the application was built.
Below are a few key requirements that are often neglected.

1. Contrast ratios

The rule: body text needs a contrast ratio of at least 4.5:1 against its background. For larger text (18pt+, or 14pt+ if bold) this is relaxed to 3:1. This is success criterion 1.4.3.
Before we were compliant, the AI disclaimer in our footer was light secondary text on a grey background:
Footer with light grey AI disclaimer
That is a contrast ratio of 1.75:1, which is not compliant. We swapped it for darker text, achieving compliance.
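
If you want to check a color pair yourself, the math behind that number is defined by WCAG (relative luminance per sRGB). Here is a small TypeScript sketch – my own helper for sanity-checking colors, not LocalMate code:

// contrast.ts – WCAG 2.x contrast ratio between two hex colors ("#rrggbb")
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

export function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// contrastRatio("#767676", "#ffffff") ≈ 4.54 – just passes the 4.5:1 body text minimum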

2. 400% zoom without horizontal scroll

Success criterion 1.4.10 says that, on a viewport at least 1280px wide, the website must still be usable at 400% zoom.
Most desktop-first designs don’t comply: sidebars, overflowing content, etc.
We were not compliant either – the chat input overflowed and made the application unusable:
LocalMate chat input going off to the side

Fixing this took a bit of UX engineering: we turned off the minimum width on the input box and moved the “Open a new chat” button to the top of the page, disconnected from the input itself:

LocalMate chat input fully contained in width

The easiest way to tell whether your website is compliant: try using it after zooming in to 400%.
That is what the cover image of this article shows. The “ugly” input conserves both width and height by dropping most of the padding on very small screens.

3. Missing or lying ARIA labels

ARIA stands for Accessible Rich Internet Applications. These are the aria-label attributes that inputs and buttons often carry.
They add semantic annotations for assistive technologies.

Things that are frequently missed are:

  • Icon-only buttons without an accessible name. Each should get an aria-label that describes the action.
  • Inputs with placeholders used as labels. Those should also get an accompanying label – a short sketch follows below.
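
Here is a minimal Preact sketch of both fixes. The component, IDs, and labels are made up for illustration, not taken from LocalMate:

// SearchBar.tsx – an accessible name for the icon button, a real label for the input
import { h } from "preact"; // classic JSX pragma; drop if you use the automatic runtime

export function SearchBar() {
  return (
    <form role="search">
      {/* The placeholder alone is not a label – pair the input with a real <label> */}
      <label for="place-search">Search for a place</label>
      <input id="place-search" type="text" placeholder="e.g. town hall" />

      {/* Icon-only button: aria-label gives it an accessible name,
          aria-hidden keeps the decorative icon out of the accessibility tree */}
      <button type="submit" aria-label="Search">
        <svg aria-hidden="true" width="16" height="16" />
      </button>
    </form>
  );
}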

4. Missing focus states and keyboard traps

If you can’t see where the keyboard focus is, you can’t use the app with a keyboard, making it non-compliant with success criterion 2.4.7.
Designers often remove the browser default (using outline: none) because it’s ugly.
For compliance we show an outline and, where applicable, the hover labels:

Send message button with focus styling
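
As a rough sketch of the idea – assuming a Preact + Tailwind setup for illustration, which is not necessarily LocalMate’s actual stack – you can style :focus-visible so keyboard users always get a clear outline:

// SendButton.tsx – visible outline for keyboard focus via Tailwind's focus-visible variant
import { h } from "preact"; // classic JSX pragma; drop if you use the automatic runtime

export function SendButton() {
  // focus-visible only matches keyboard focus, so mouse clicks stay visually clean
  return (
    <button
      type="submit"
      aria-label="Send message"
      class="rounded-full p-2 focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2"
    >
      Send
    </button>
  );
}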

Keyboard traps

A keyboard trap is when the tab cycle gets stuck in a loop, like tabbing into the footer and never getting out.
This is a no-go. For people who cannot use a mouse, it means refreshing the page every time they get trapped.
It can be corrected by fixing the tab order on the website, even though most of the time the browser should handle this on its own.

The point

I could go on for days about other things that are frequently non-compliant, but the core message is: don’t just assume your work is accessible, make sure it is.

Tools that helped us

Doing this manually and finding all the weak spots is annoying and takes tons of time. Luckily, there are tools to help.

Lighthouse reports

Google has a website called PageSpeed Insights. While it also covers SEO and metrics like First Contentful Paint, it offers an Accessibility section where some issues get caught.

axe DevTools

axe DevTools is a browser plugin that lets you run a diagnostic on your localhost page, catching issues before they ever reach production.
It doesn’t catch everything, but it quickly points out quick wins and obvious problems, like insufficient contrast.
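
The same axe-core engine can also run automatically. Here is a sketch of what that could look like with the @axe-core/playwright package – a suggestion rather than part of our actual setup, and the URL and tag list are assumptions (WCAG 2.2 tags depend on your axe-core version):

// a11y.spec.ts – running the axe-core checks inside a Playwright test
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("home page has no detectable WCAG A/AA violations", async ({ page }) => {
  await page.goto("http://localhost:3000");

  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag21aa", "wcag22aa"])
    .analyze();

  expect(results.violations).toEqual([]);
});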

AI Agents

Let’s be honest: most of the heavy lifting can be done by AI agents. They can quickly scan through the entire codebase, use a Playwright MCP to figure out the tab order, etc.
Running Claude Code in plan mode to find, explain, and correct mistakes gets you about 90% of the way there.

The uncomfortable takeaway

Most web apps aren’t WCAG AA compliant because accessibility is hard to see and easy to defer. There are no actionable bug reports from the users who get left behind, because they got frustrated and left. There is almost no marketing upside, because it hardly translates into a nice pitch deck, and it slows down features.
The way I have come to think about it: WCAG is a proxy for whether a team takes non-ideal users seriously – the person who forgot their glasses, the one using the app on a phone on the bus, or the one whose first language isn’t the one in the UI. If the response to any of those is “well, it kind of works”, the team is shipping a product that divides.

Making LocalMate WCAG AA compliant took real engineering and UX time. It also surfaced lots of small quality issues that improved the product for everyone, not just users with disabilities.

This article was originally published on the Centro Labs Blog – check it out there!