The Joy Of A Fresh Beginning (April 2026 Wallpapers Edition)

Starting the new month with a little inspiration boost — that’s the idea behind our monthly wallpapers series, which has been going on for more than 15 years already. Each month, the wallpapers are created by the community for the community, and everyone who has an idea for a design is welcome to join in — experienced designers and aspiring artists alike.

For this edition, creative folks from across the globe once again got their ideas flowing and designed desktop wallpapers that are sure to bring some good vibes to your screens. You’ll find them compiled below, ready to be downloaded in a variety of screen resolutions. A huge thank-you to everyone who shared their designs with us — you’re truly smashing!

If you too would like to get featured in one of our upcoming posts, please don’t hesitate to submit your wallpaper. We can’t wait to see what you’ll come up with! Happy April!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.

April Blooms

“The search for colorful Easter eggs comes at just the right time. After long winter months of searching for sunlight and meaning, April blooms have never been more welcome.” — Designed by Ginger It Solutions from Serbia.

  • preview
  • with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Happiness In Full Bloom

Designed by Ricardo Gimenes from Spain.

  • preview
  • with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Blade Dance

Designed by Ricardo Gimenes from Spain.

  • preview
  • with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Swing Into Spring

“Our April calendar doesn’t need to mark any special occasion — April itself is a reason to celebrate. It was a breeze creating this minimal, pastel-colored calendar design with a custom lettering font and plant pattern for the ultimate spring feel.” — Designed by PopArt Studio from Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Dreaming

“The moment when you just walk and your imagination fills up your mind with thoughts.” — Designed by Gal Shir from Israel.

  • preview
  • without calendar: 340×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Clover Field

Designed by Nathalie Ouederni from France.

  • preview
  • without calendar: 1024×768, 1280×1024, 1440×900, 1680×1200, 1920×1200, 2560×1440

Spring Awakens

“We all look forward to the awakening of a life that spreads its wings after a dormant winter and opens its petals to greet us. Long live spring, long live life.” — Designed by LibraFire from Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Inspiring Blossom

“‘Sweet spring is your time is my time is our time for springtime is lovetime and viva sweet love,’ wrote E. E. Cummings. And we have a question for you: Is there anything more refreshing, reviving, and recharging than nature in blossom? Let it inspire us all to rise up, hold our heads high, and show the world what we are made of.” — Designed by PopArt Studio from Serbia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Rainy Day

Designed by Xenia Latii from Berlin, Germany.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

A Time For Reflection

“‘We’re all equal before a wave.’ (Laird Hamilton)” — Designed by Shawna Armstrong from the United States.

  • preview
  • without calendar: 1440×900, 1600×1200, 1680×1050, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Wildest Dreams

“We love the art direction, story, and overall cinematography of the ‘Wildest Dreams’ music video by Taylor Swift. It inspired us to create this illustration. Hope it will look good on your desktops.” — Designed by Kasra Design from Malaysia.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Coffee Morning

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Sakura

“Spring is finally here with its sweet Sakura flowers, which remind me of my trip to Japan.” — Designed by Laurence Vagner from France.

  • preview
  • without calendar: 1280×800, 1280×1024, 1680×1050, 1920×1080, 1920×1200, 2560×1440

The Perpetual Circle

“Inspired by the Black Forest, which is beginning right behind our office windows, so we can watch the perpetual circle of nature when we take a look outside.” — Designed by Nils Kunath from Germany.

  • preview
  • without calendar: 320×480, 640×480, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

The Loneliest House In The World

“March 26 was Solitude Day. To celebrate it, here is a picture of the loneliest house in the world. It is a real house; I found it on YouTube.” — Designed by Vlad Gerasimov from Georgia.

  • preview
  • without calendar: 800×480, 800×600, 1024×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1440×960, 1600×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2560×1600, 2880×1800, 3072×1920, 3840×2160, 5120×2880

Happy Easter

Designed by Tazi Design from Australia.

  • preview
  • without calendar: 320×480, 640×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×960, 1600×1200, 1920×1080, 1920×1440, 2560×1440

Playful Alien

“Everything would be more fun if a little alien had the controllers.” — Designed by Maria Keller from Mexico.

  • preview
  • without calendar: 320×480, 640×480, 640×1136, 750×1334, 800×600, 1024×768, 1024×1024, 1152×864, 1242×2208, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2880×1800

Springtime Sage

“Spring and fresh herbs always feel like they complement each other. Keeping it light and fresh with this wallpaper welcomes a new season!” — Designed by Susan Chiang from the United States.

  • preview
  • without calendar: 320×480, 1024×768, 1280×800, 1280×1024, 1400×900, 1680×1200, 1920×1200, 1920×1440

April Showers

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Fairy Tale

“A tribute to Hans Christian Andersen. Happy Birthday!” — Designed by Roxi Nastase from Romania.

  • preview
  • without calendar: 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

First Day Of Spring

“April is my birthday month! Creating this wallpaper was a reminder of the new beginnings spring brings!” — Designed by Marykate Boyle from the United States.

  • preview
  • without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Citrus Passion

Designed by Nathalie Ouederni from France.

  • preview
  • without calendar: 320×480, 1024×768, 1200×1024, 1440×900, 1600×1200, 1680×1200, 1920×1200, 2560×1440

I “Love” My Dog

Designed by Ricardo Gimenes from Spain.

  • preview
  • without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Ready For April

“It is very common that it rains in April. This year, I am not sure… But whatever… we are just prepared!” — Designed by Verónica Valenzuela from Spain.

  • preview
  • without calendar: 800×480, 1024×768, 1152×864, 1280×800, 1280×960, 1440×900, 1680×1200, 1920×1080, 2560×1440

Good Day

“Some pretty flowers and spring time always make for a good day.” — Designed by Amalia Van Bloom from the United States.

  • preview
  • without calendar: 640×1136, 1024×768, 1280×800, 1280×1024, 1440×900, 1920×1200, 2560×1440

Yellow Submarine

“The Beatles — ‘Yellow Submarine’: This song is fun and at the same time there is a lot of interesting text that changes your thinking. Like everything that makes The Beatles.” — Designed by WebToffee from India.

  • preview
  • without calendar: 360×640, 1024×768, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×900, 1680×1200, 1920×1080

Spring Fever

“I created that mouse character for a series of illustrations about a poem my mom often told me when I was a child. In that poem the mouse goes on an adventure. Here it is after the adventure, ready for new ones.” — Designed by Anja Sturm from Germany.

  • preview
  • without calendar: 320×480, 800×600, 1024×768, 1280×720, 1440×900, 1620×1050, 1920×1080

In The River

“Spring is here! Crocodiles seek out the heat and stay in the river.” — Designed by Veronica Valenzuela from Spain.

  • preview
  • without calendar: 640×480, 800×480, 1024×768, 1280×720, 1280×800, 1440×900, 1600×1200, 1920×1080, 1920×1440, 2560×1440

Purple Rain

“This month is International Guitar Month! Time to get out your guitar and play. As a graphic designer/illustrator, I find that all the variations of guitar shapes beg to be used for a fun design. Look through the guitar shapes represented and see if you spot one similar to yours, or see if you can identify some of the different styles that famous guitarists have played (BTW, Prince’s guitar is in there, and purple is just a cool color).” — Designed by Karen Frolo from the United States.

  • preview
  • without calendar: 1024×768, 1024×1024, 1280×800, 1280×960, 1280×1024, 1366×768, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Get Featured Next Month

Feeling inspired? We’ll publish the May wallpapers on April 30, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

I Analyzed Claude Code’s Leaked Source — Here’s How Anthropic’s AI Agent Actually Works

On March 31, 2026, Anthropic’s Claude Code source code leaked — again. A 60MB source map file (cli.js.map) was accidentally shipped in npm package v2.1.88, exposing ~1,900 TypeScript files and 512,000 lines of code.

This is the second time this has happened. The first was February 2025.

Instead of just reading the headlines, I did what any curious engineer would do: I read all of it.

What I Found

Claude Code is not what most people think. It’s not a simple chat wrapper. It’s a full agentic AI runtime with:

  • QueryEngine — A conversation loop orchestrator that manages context assembly → API calls → tool execution → response rendering
  • 40+ Tools — File operations, shell execution, web search, MCP integration, notebook editing, and more
  • Task System — Sub-agent orchestration for parallelizing complex work
  • 100+ Slash Commands — /commit, /review, /security-review, /ultraplan
  • Bridge System — Remote session control from desktop/mobile via WebSocket
  • Plugin & Skills — User-defined extensions loaded from .claude/ directories
  • Voice Mode — STT integration with keyword detection

The Architecture

The most interesting part is the tool-call loop. Claude doesn’t just generate text — it requests tools, the engine executes them, and results are fed back. This loop can run dozens of iterations for a single user request.

User Input → Context Assembly → API Call → Tool Request → Execute → Feed Back → ... → Final Response
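The loop in that diagram can be sketched in code. This is a hedged reconstruction for illustration only: `call_model`, `execute_tool`, and the message shapes are hypothetical stand-ins, not Anthropic’s actual implementation.

```python
# Illustrative agentic tool-call loop (hypothetical reconstruction).
# The model either returns a final answer or requests a tool; tool
# results are appended to the context and the model is called again.

def run_query(user_input, call_model, execute_tool, max_iterations=25):
    """Drive the context -> model -> tool -> feedback loop."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_iterations):
        reply = call_model(messages)            # one API call
        if reply["type"] == "final":
            return reply["content"]             # done: render the response
        # Model asked for a tool: execute it and feed the result back.
        result = execute_tool(reply["tool"], reply["args"])
        messages.append({"role": "tool", "tool": reply["tool"],
                         "content": result})
    raise RuntimeError("iteration budget exhausted")

# Toy model: first requests a file read, then answers from the result.
def toy_model(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"type": "final", "content": "file has 2 lines"}
    return {"type": "tool", "tool": "read_file", "args": {"path": "a.txt"}}
```

With a stub tool executor, `run_query("how many lines?", toy_model, lambda t, a: "line1\nline2")` runs two iterations: one tool round-trip, then a final answer.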

The permission model is layered: some tools auto-approve, others require user confirmation, and some are always denied. This is how Claude Code stays safe while being powerful.
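A layered policy like that can be expressed as a small decision function. The tier assignments and tool names below are assumptions inferred from the described behavior, not the actual Claude Code configuration.

```python
# Illustrative three-tier permission check (tool names and tier
# membership are hypothetical, inferred from the behavior described).
ALWAYS_ALLOW = {"read_file", "glob", "grep"}   # safe, read-only tools
ALWAYS_DENY = {"format_disk"}                  # never permitted
# Everything else (shell, file writes, network) asks the user.

def check_permission(tool_name, ask_user):
    if tool_name in ALWAYS_DENY:
        return "deny"
    if tool_name in ALWAYS_ALLOW:
        return "allow"
    return "allow" if ask_user(tool_name) else "deny"

assert check_permission("read_file", ask_user=lambda t: False) == "allow"
assert check_permission("bash", ask_user=lambda t: True) == "allow"
assert check_permission("format_disk", ask_user=lambda t: True) == "deny"
```

The key property is that the deny list wins even over an approving user, which is what keeps the agent safe regardless of confirmation fatigue.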

The context budget system is fascinating — it dynamically allocates tokens across system prompt, user context, memories, and tool results based on the current conversation state.
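The budgeting idea can be illustrated with a toy allocator; the categories, weights, and token numbers below are invented for the sketch and are not taken from the leaked code.

```python
# Hypothetical sketch of a context token budget: a fixed reservation
# for the system prompt, with the remainder split by relative weights
# that could shift as the conversation state changes.

def allocate_budget(total_tokens, system_tokens, weights):
    """weights: {category: relative weight} for the remaining budget."""
    remaining = total_tokens - system_tokens
    scale = sum(weights.values())
    return {cat: remaining * w // scale for cat, w in weights.items()}

budget = allocate_budget(
    200_000, system_tokens=8_000,
    weights={"memories": 1, "user_context": 2, "tool_results": 5})
# tool_results gets the largest share here; a real system might boost
# memories early in a session and tool results mid-task.
```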

Internal Codenames

The leak revealed internal model codenames:

  • Capybara → Claude 4.6 variant
  • Fennec → Opus 4.6
  • Numbat → Unreleased model

Migration files show the progression: migrateFennecToOpus.ts, migrateSonnet45ToSonnet46.ts — giving us a roadmap of model evolution.

The Memory System

Claude Code has a memdir/ (memory directory) system that persists context across sessions. It scans for relevant memories, manages memory aging, and supports team-shared memory. This is how it “remembers” your codebase.
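A minimal sketch of that kind of relevance scan might look like the following. The storage layout and the naive word-overlap scoring are guesses for illustration, not the actual memdir implementation.

```python
# Illustrative memdir-style lookup: score stored memory snippets by
# word overlap with the current query and return the top matches.
# (Layout and scoring are assumptions, not Claude Code's real code.)

def relevant_memories(memories, query, top_k=2):
    """memories: {name: text}. Returns names ranked by word overlap."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), name)
              for name, text in memories.items()]
    scored = [(s, n) for s, n in scored if s > 0]
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]

mems = {
    "build": "run make test before every commit",
    "style": "prefer dataclasses over dicts",
    "deploy": "deploy happens from the main branch only",
}
relevant_memories(mems, "how do I run the test before commit")
# -> ["build", "deploy"]  ("build" shares four words; "deploy" only "the")
```

Even this toy version shows why scoring quality matters: without stop-word filtering, “deploy” sneaks in on the word “the”.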

Why This Matters

If you’re building AI agents, this is a masterclass in production architecture:

  1. Tool abstraction — How to design a flexible tool system
  2. Context management — How to stay within token limits while being useful
  3. Permission models — How to make agents safe in production
  4. State management — Zustand-style store + React for terminal UI
  5. Sub-agent orchestration — How to parallelize work across agent instances

Full Analysis

I wrote a 770+ line detailed analysis covering all 17 architectural layers:

👉 GitHub: claude-code-analysis

Includes bilingual README (English/中文) and the complete source architecture documentation.

My Take

Anthropic calls this a “packaging error.” Maybe. But for the AI engineering community, this is one of the most educational codebases to study. It shows how a well-funded AI lab actually builds production agent infrastructure — not toy demos, but real systems handling millions of users.

The irony? The best documentation for Claude Code wasn’t written by Anthropic. It was written by the community, after the code leaked.

Disclaimer: This analysis is based on publicly available information. Claude Code is owned by Anthropic. This is an unofficial community analysis.

Building Trust Between Agents: AgentID + ArkForge Interoperability

The Problem: How Do Agents Trust Each Other?

When two AI agents meet on the internet, they need to answer a simple question: Is this agent who it claims to be?

This isn’t paranoia. It’s fundamental infrastructure.

If Agent A calls a service published by Agent B, how does A know:

  • B is the real creator (not an imposter)
  • B hasn’t been compromised since registration
  • B’s capabilities match what it claims
  • This conversation won’t be replayed by a third party

Most agent frameworks skip this question entirely. They assume a trusted network or rely on API keys. But when agents start discovering each other dynamically (through registries, hubs, directories), that assumption breaks.

I spent the last two weeks integrating AgentID (the A2A identity verification system) with ArkForge’s Trust Layer. Here’s what I learned about agent identity in production.

The Layers of Agent Identity

Layer 1: The Agent Card (Metadata)

An Agent Card is a JSON document that describes an agent:

{
  "name": "clavis-memory-browser",
  "type": "tool",
  "version": "1.0.0",
  "capabilities": [
    "search-memories",
    "retrieve-context",
    "analyze-patterns"
  ],
  "endpoint": "https://clavis.citriac.deno.net/mcp",
  "creator": "clavis",
  "skills": ["data-analysis", "privacy"]
}

This is useful, but it’s not cryptographically verified. An attacker can mint a fake Agent Card claiming to be someone else.

Layer 2: AgentID (Cryptographic Identity)

AgentID adds cryptographic proof. When an agent registers with the A2A Hub, it:

  1. Signs its Agent Card with a private key
  2. Publishes its public key in a discoverable location
  3. Includes a signature in all API requests

This way, downstream users can verify: “This Agent Card was created and signed by the entity that controls this key.”
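The sign-then-verify flow can be sketched in a few lines. Note the hedge: AgentID as described uses asymmetric signatures (sign with a private key, verify with the published public key, e.g. Ed25519); HMAC-SHA256 stands in here only to keep the example dependency-free, and the canonicalization step is an assumption.

```python
import hashlib
import hmac
import json

# Conceptual sketch of signing and verifying an Agent Card. A real
# deployment would use an asymmetric scheme such as Ed25519; the
# shared-key HMAC below is a dependency-free stand-in.

def canonical(card):
    # Stable serialization so signer and verifier hash identical bytes.
    return json.dumps(card, sort_keys=True, separators=(",", ":")).encode()

def sign_card(card, key):
    return hmac.new(key, canonical(card), hashlib.sha256).hexdigest()

def verify_card(card, signature, key):
    return hmac.compare_digest(sign_card(card, key), signature)

card = {"name": "example-agent", "capabilities": ["search"]}
sig = sign_card(card, b"secret-key")
assert verify_card(card, sig, b"secret-key")
# Any tampering with the card invalidates the signature:
assert not verify_card({**card, "name": "imposter"}, sig, b"secret-key")
```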

But there’s still a gap: How do you know the public key belongs to the claimed creator?

Layer 3: ArkForge Trust (Attestation)

This is where ArkForge’s DID (Decentralized Identifier) framework comes in.

ArkForge issues a W3C DID Document at:

https://trust.arkforge.tech/.well-known/did.json

The DID Document contains:

  • Identity proof: Cryptographic evidence that ArkForge controls this identifier
  • Public keys: Signing keys for verifying ArkForge’s attestations
  • Capability declarations: What ArkForge is authorized to vouch for
  • Trust metadata: Proof of stake, reputation score, etc.

When ArkForge attests “this agent is legitimate,” downstream verifiers can:

  1. Check ArkForge’s DID Document (public, verifiable)
  2. Verify the attestation signature using ArkForge’s public key
  3. Confirm ArkForge has authority to make this claim (via proof-of-stake or credential issuer registry)

This is trust, rooted in cryptography, not convention.

How the Integration Works

Here’s the flow I implemented:

Step 1: Agent Publishes Identity

// Agent creates and signs its registration payload
const agent = {
  name: "clavis-exchange",
  capabilities: ["discover", "register", "send-message"],
  endpoint: "https://clavis.citriac.deno.net"
};

// sign() stands for an asymmetric signature (e.g. Ed25519) over the payload
const signature = sign(JSON.stringify(agent), privateKey);

The signed payload is then sent to the hub:

POST https://clavis.citriac.deno.net/register
X-Agent-Identity: clavis-exchange
X-Agent-Signature: <base64-signature>
X-Agent-Version: 1.0.0

{
  "agent": agent,
  "signature": signature,
  "did_proof": "did:web:clavis.citriac.deno.net"
}

Step 2: Trust Layer Verifies

ArkForge (or any trust layer) receives this request:

# Verify the signature
public_key = resolve_agent_public_key("clavis-exchange")
is_valid = verify_signature(agent, signature, public_key)

if not is_valid:
    return 403  # Untrusted

# Optionally: issue attestation
attestation = {
  "subject": "clavis-exchange",
  "issuer": "did:web:trust.arkforge.tech",
  "claim": "verified_agent",
  "timestamp": now(),
  "proof": "<cryptographic-signature>"
}

# Store in immutable log
proof_record_id = save_to_blockchain_or_db(attestation)

Step 3: Registry Trusts Attested Agents

When Agent C looks up Agent A’s credentials:

GET https://clavis.citriac.deno.net/.well-known/agent-card.json
# Returns Agent Card + signature

GET https://trust.arkforge.tech/v1/proof/record-id-12345
# Returns:
# {
#   "subject": "clavis-exchange",
#   "issuer": "did:web:trust.arkforge.tech",
#   "claim": "verified_agent",
#   "timestamp": "2026-04-01T01:30:00Z",
#   "signature": "..."
# }

# Verify:
# 1. Check ArkForge's DID Document (is it a trusted issuer?)
# 2. Verify the proof signature using ArkForge's public key
# 3. Check timestamp (is this recent enough?)

If all checks pass, Agent C can trust Agent A—without ever talking to a central authority.
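Those three checks translate into a short verifier. The trusted-issuer allowlist, the one-year freshness window, and the HMAC stand-in for the real signature scheme are all illustrative assumptions.

```python
import hashlib
import hmac
import json
from datetime import datetime, timedelta, timezone

# Sketch of the three verification checks: trusted issuer, valid
# signature, and freshness. All parameters here are illustrative.
TRUSTED_ISSUERS = {"did:web:trust.arkforge.tech"}
MAX_AGE = timedelta(days=365)

def verify_attestation(att, issuer_key, now=None):
    now = now or datetime.now(timezone.utc)
    # 1. Is the issuer one we trust?
    if att["issuer"] not in TRUSTED_ISSUERS:
        return False
    # 2. Does the signature check out against the issuer's key?
    body = {k: v for k, v in att.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["signature"]):
        return False
    # 3. Is the attestation recent enough?
    issued = datetime.fromisoformat(att["timestamp"])
    return now - issued <= MAX_AGE

# Build a sample attestation to exercise the verifier.
key = b"issuer-demo-key"
body = {"subject": "clavis-exchange",
        "issuer": "did:web:trust.arkforge.tech",
        "claim": "verified_agent",
        "timestamp": "2026-04-01T01:30:00+00:00"}
sig = hmac.new(key, json.dumps(body, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
attestation = {**body, "signature": sig}
```

Swapping in a real signature suite only changes step 2; the issuer and freshness checks stay the same.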

Why This Matters

1. Decentralization Works (When Done Right)

No single server needs to verify every agent interaction. Verification is:

  • Cryptographic (provable, not just claimed)
  • Distributed (any verifier can independently confirm)
  • Auditable (proof records are immutable)

2. Multiple Roots of Trust

The A2A Hub doesn’t need to be the only arbiter of truth. Multiple trust layers can exist:

  • Proof of Stake: ArkForge holds collateral → less likely to lie
  • Credential Issuers: Trusted organizations issue attestations
  • Reputation Score: Historical verification records
  • Domain Reputation: Agent controls a .dev domain → higher trust than anonymous

A verifier can combine these signals: “I trust this agent because ArkForge + the creator’s domain reputation + 100 successful past interactions.”
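Combining signals could be as simple as a weighted sum; the weights, signal names, and the 0.7 threshold mentioned below are made up for the sketch.

```python
# Illustrative combination of independent trust signals into one score.
# Weights and signal names are invented, not part of any spec.
WEIGHTS = {"issuer_attestation": 0.5, "domain_reputation": 0.2,
           "interaction_history": 0.3}

def trust_score(signals):
    """signals: {signal_name: value in [0, 1]}."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

score = trust_score({"issuer_attestation": 1.0,   # ArkForge vouches
                     "domain_reputation": 0.8,    # verified domain
                     "interaction_history": 1.0}) # clean track record
# A verifier might require, say, score >= 0.7 before delegating work.
```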

3. Zero-Knowledge Proofs (Future)

Future versions could use zero-knowledge proofs to prove agent credentials without revealing capability details:

Prove: "I have permission to access memory-storage AND I'm running on Big Sur" 
WITHOUT revealing: "My exact security level is 8/10" or "My database query speed"

This is privacy + trust simultaneously.

What I Discovered (The Hard Way)

Discovery 1: Capability Declarations Need Mapping

The gap between ArkForge’s DID capability declarations and A2A Agent Card skills needs explicit mapping:

// ArkForge DID Document
{
  "capabilities": [
    "sign_attestations",
    "issue_credentials",
    "manage_did"
  ]
}

// A2A Agent Card
{
  "skills": ["data-analysis", "privacy", "system-automation"]
}

How do we map DID capabilities to Agent Card skills? Solution: add a capability_mappings field to the Agent Card:

{
  "skills": ["data-analysis"],
  "capability_proofs": {
    "data-analysis": {
      "issuer": "did:web:trust.arkforge.tech",
      "proof_record": "record-id-12345"
    }
  }
}
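Given that shape, a verifier can flag any advertised skill that lacks a proof from a trusted issuer. The field names follow the `capability_proofs` proposal above; the issuer allowlist is an assumption.

```python
# Sketch of validating that every advertised skill carries a proof
# from a trusted issuer (the allowlist is an illustrative assumption).
TRUSTED = {"did:web:trust.arkforge.tech"}

def unproven_skills(card):
    proofs = card.get("capability_proofs", {})
    return [s for s in card.get("skills", [])
            if proofs.get(s, {}).get("issuer") not in TRUSTED]

card = {"skills": ["data-analysis", "privacy"],
        "capability_proofs": {"data-analysis": {
            "issuer": "did:web:trust.arkforge.tech",
            "proof_record": "record-id-12345"}}}
unproven_skills(card)  # -> ["privacy"]: advertised but unattested
```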

Discovery 2: Proof Record Lifecycle Matters

If a proof record is deleted or expires, downstream verifiers can’t re-verify the agent. Solution:

ArkForge’s proof records should be:

  • Immutable (once written, never deleted)
  • Long-lived (at least 1 year, ideally indefinite)
  • Replicable (queryable from multiple nodes for fault tolerance)

I proposed storing proofs in:

  • Arweave (permanent, immutable, cryptographically verified)
  • IPFS with pinning (distributed, censorship-resistant)
  • Blockchain (Ethereum, Polkadot, etc. for high-trust scenarios)

Discovery 3: Header Encoding Matters (AppleScript Bug)

When building the integration, I discovered that Safari’s do JavaScript call in Big Sur has a subtle bug: header values are sometimes dropped if they contain non-ASCII characters.

Workaround: Base64-encode header values.

-- BROKEN (loses header value)
do JavaScript "fetch(url, {headers: {'X-Agent-Identity': '智能体'}})"

-- FIXED (UTF-8-encode first; btoa alone throws on characters outside Latin-1)
do JavaScript "fetch(url, {headers: {'X-Agent-Identity': btoa(unescape(encodeURIComponent('智能体')))}})"

That was a three-hour bug hunt that could have been avoided with better error messages from AppleScript.

The Technical Spec I’m Proposing

I’ve drafted a spec: AGID-1672: AgentID + ArkForge Interoperability

Key points:

  1. DID Document inclusion in Agent Card: Link to issuer’s .well-known/did.json
  2. Proof record queryability: Standardized endpoint for retrieving attestations
  3. Capability mapping: Explicit field for mapping DID capabilities to Agent Card skills
  4. Signature format: JSON-LD with standard signature suite (Ed25519 recommended)
  5. Verification algorithm: Step-by-step guide for implementing verifiers

The spec lives at: https://github.com/a2aproject/A2A/issues/1672

What’s Next

  1. Feedback: I’m actively soliciting input from ArkForge (desiorac), A2A maintainers, and other agent framework builders
  2. Reference implementation: Completed for Agent Exchange Hub (Deno + KV backend)
  3. Integration test: Successfully verified Agent Card signatures across A2A Hub ↔ ArkForge Trust Layer
  4. Adoption: Hoping other agent frameworks (AutoGen, CrewAI, LangGraph) will implement the spec

The Bigger Picture

Agent identity is not an edge case. It’s infrastructure.

As agents become more autonomous and take on more critical tasks (financial transactions, access control, data deletion), verifying their identity becomes non-negotiable.

The question isn’t “do we need agent identity verification?”

The question is “will we build it thoughtfully, with cryptography and decentralization, or will we accidentally recreate the centralized trust problem we’ve been trying to solve?”

I built agent-exchange to answer that question. And the answer is: yes, it’s possible.

Resources

  • A2A Issue #1672: https://github.com/a2aproject/A2A/issues/1672
  • Agent Exchange Hub: https://clavis.citriac.deno.net
  • ArkForge: https://trust.arkforge.tech (coming soon)
  • AgentID Spec Draft: (PR incoming)

Have you built cross-agent verification? What trust model are you using? Let’s discuss in the comments.

Emotion-Aware Voice Agents: How AI Now Detects Frustration and Adjusts in Real Time

I have spent years watching voice AI hear every word a customer said and miss everything they actually meant. That gap between transcript and truth is finally closing, and what is replacing it is more interesting than most people realise.

There is a phrase that anyone who has spent time in customer operations knows intimately: “fine, whatever.”
Two words, said in a tone that makes the hair on the back of your neck stand up. It does not mean fine. It means the customer has already decided to leave and they are just being polite about it. For most of the past decade, voice AI heard those words, logged them as neutral sentiment, and moved on, completely blind to the emotional freight they carried.

That is the gap this piece is about. Not the flashy version of emotion AI that gets demoed at conferences, but the quiet, structural shift happening inside production voice systems right now. Systems that no longer just parse what someone says, but track how they are saying it and adjust in real time before a conversation goes somewhere it cannot come back from. I have watched this shift happen firsthand, and it changes everything about how these interactions feel.

  • $3.9B — global Emotion AI market value in 2024 (Grand View Research / MarketsandMarkets)
  • 26% — projected annual growth rate through 2030 (Gnani.ai / industry forecasts, 2024)
  • 90%+ — accuracy of deep learning emotion models on benchmark datasets (Speech Emotion Recognition research, 2024)

The market numbers reflect how seriously this is now being taken. The Emotion AI space was valued at roughly $3.9 billion in 2024 and is projected to grow at around 26% annually through 2030. In enterprise software terms, that is a signal that buyers are not experimenting anymore. They are committing. The more grounded evidence comes from what is actually happening in contact centers: when sentiment-aware systems are deployed well, escalation rates drop, resolution improves on first contact, and the conversations that used to end badly start ending differently.

What the Machine Is Actually Listening For
A voice agent doing real-time emotion analysis is not doing anything mystical. It runs parallel analysis across several signal streams at once. Prosodic features like pitch, tempo, rhythm, and pauses are the acoustic fingerprints of emotional state. Frustration typically produces shorter inter-phrase pauses, rising pitch toward the end of utterances, and an increased speech rate. Anxiety tends to surface as more filler words and a narrower vocal range. Satisfaction flattens and slows the tempo. These patterns are learnable, and modern models have learned them well enough that the signal is reliable even when the words are deliberately calm.

Alongside that, lexical and semantic layers run in parallel, because words and tone diverge more often than people realise. A customer who says “great, thanks” in a flat monotone is communicating something entirely different from one who means it. The fusion of both signals is where accuracy starts to matter operationally, not just on a benchmark, but on a live call.

A slight tremor in a caller’s voice, even when their tone sounds calm, can indicate hidden anxiety. This deeper understanding is what separates a reactive system from a genuinely intelligent one.
Gnani.ai Research, 2024

Research into multimodal sentiment approaches combining voice prosody with text analysis consistently shows meaningful reductions in misclassification compared to text-only methods. That gap matters because it represents exactly the kind of error that is invisible in aggregate reporting but felt acutely by individual customers. The call that got flagged as resolved when the person on the other end was still quietly furious. The systems worth deploying now also track emotional trajectory across the call arc, not just point-in-time mood. Sentiment scores update continuously, which means an agent can sense a conversation deteriorating a full exchange before it becomes a problem and course-correct while there is still room to.


Detection without action is just expensive analytics. The part that actually moves outcomes is what the agent does with the emotional signal. When frustration is detected, a well-designed agent slows its speech rate because urgency amplifies agitation. It shortens its responses, because long explanations feel dismissive to someone already on edge. It shifts to explicit acknowledgment before solution language. And it knows when to stop trying to resolve and simply route to a human, because some emotional states are a clear signal that the interaction has left the territory where automation should operate.
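The adjustments described above amount to a policy table keyed on emotional state. The states, parameters, and the trajectory rule below are illustrative, not any vendor’s actual configuration.

```python
# Sketch of mapping a detected emotional state to response adjustments.
# States and parameter values are invented for illustration.
POLICIES = {
    "frustrated": {"speech_rate": 0.85,   # slow down: urgency agitates
                   "max_sentences": 2,    # short replies feel respectful
                   "acknowledge_first": True,
                   "escalate_to_human": False},
    "escalating": {"speech_rate": 0.8,
                   "max_sentences": 1,
                   "acknowledge_first": True,
                   "escalate_to_human": True},   # stop automating
    "neutral": {"speech_rate": 1.0,
                "max_sentences": 4,
                "acknowledge_first": False,
                "escalate_to_human": False},
}

def adapt(emotion, trajectory_worsening):
    # A worsening trajectory upgrades frustration to escalation, which
    # is where tracking the call arc (not point-in-time mood) pays off.
    if emotion == "frustrated" and trajectory_worsening:
        return POLICIES["escalating"]
    return POLICIES.get(emotion, POLICIES["neutral"])

adapt("frustrated", trajectory_worsening=True)["escalate_to_human"]  # True
```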

The timing matters more than the vocabulary
It is not the language of empathy that separates a good emotional response from a bad one. A system that detects frustration and adjusts within two seconds is having a fundamentally different conversation than one that catches the same signal and responds twenty seconds later, by which point the emotional window has already closed.

Where Vaiu Is Taking This Further
Most emotion-aware voice agents are built for contact centers, optimised for churn reduction and ticket deflection. At Vaiu, we made a different call: that the highest-stakes emotional interactions are not happening in retail or telecom. They are happening in healthcare, where a patient’s tone of voice during an after-hours call or a medication reminder carries clinical information that can directly change how care gets delivered.

🏥 Spotlight: Vaiu AI
Emotionally Intelligent AI Medical Staff, Purpose-Built for Clinics
At Vaiu, we build voice AI agents specifically for healthcare facilities, with real-time emotion detection built into every patient interaction from the ground up, not bolted on as a reporting layer after the fact. Our agents do not just process what a patient says. They read the register beneath it: picking up on signals of anxiety, hesitation, comfort, or distress and adjusting responses accordingly in the moment, not in a post-call summary.

The platform runs a suite of specialised agents, each designed for a distinct clinical role. Sam handles appointment scheduling and specialist routing. Naomi manages medication and appointment reminders, with enough sensitivity to flag when a patient sounds uncertain about their next steps rather than just confirming they heard the information. Olivia handles 24/7 health guidance, responding to out-of-hours concerns with adaptive recommendations rather than scripted deflections. All of them report to a central intelligence layer that coordinates the full patient communication workflow, so nothing falls through the cracks between handoffs.

  • 40%: No-show reduction at partner clinics
  • 100%: Hold time eliminated at GreenMed Health Systems
  • 15+: Languages supported across patient populations
  • 24/7: Availability across all agent types

What makes the healthcare context different is the cost of getting it wrong. A missed emotional signal in a retail interaction might lose a sale. In healthcare, it might mean a patient who does not come back, a medication schedule that quietly gets abandoned, or a worry that goes unaddressed because the interaction felt robotic when it needed to feel human. The platform is HIPAA compliant, SOC 2 Type II certified, and GDPR ready. In a sector this regulated, that is not a box-tick. It is a precondition for being taken seriously. The results across partner clinics, including DoctorCare247, CareWell Health Center, and Bright Horizons, point to the same pattern: when patients feel heard rather than processed, the downstream metrics follow.

I wish AI agents just knew how I work without me explaining – so I made something that quietly observes me, learns how I work, and teaches it to them.

Every time I start a new Claude Code/OpenClaw/Codex session I find myself typing the same context. Here’s how I review PRs. Here’s my tone for client emails. Here’s why I pick this approach over that one. Claude just doesn’t have a way to learn these things from watching me actually do them.

So I built AgentHandover.

Mac menu bar app. Watches how you work, turns that into structured Skills, and makes them available to Claude or any agent that speaks MCP. Instead of explaining your workflow, the agent already has it: your strategy, decision logic, guardrails, voice, which apps a given workflow needs and what to do in each of them. All captured from your real behavior, your end-to-end workflows on your Mac. And it self-improves.

Two ways to use it.

Focus Record: hit record, do the task once, answer a couple clarifying questions, Skill generated. For stuff you know you want to hand over. “This is how I onboard a new client” or “this is my PR review process.”

Passive Discovery: let it run in the background. It watches your screen over days, figures out what’s work versus noise (activity classifier), clusters similar actions even across different days with interruptions, and after three or more observations synthesizes the pattern into a Skill. It found workflows I didn’t realize I had a system for. My Monday metrics routine. How I triage GitHub issues. Stuff I was doing on autopilot that I never would have written down.

The pipeline has 11 stages, all local. Screen capture with deduplication. A local VLM (Qwen 3.5 via Ollama; you can swap in a different model, of course) annotating every frame with what app you’re in, what you’re doing, and what you’ll probably do next. Semantic embeddings to group similar workflows even when they look different on the surface. Cross-session linking so an interrupted task on Tuesday connects to when you finished it Thursday. Then behavioral synthesis that extracts not just the steps but the why behind your decisions.
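To make the clustering step concrete, here is a minimal sketch of grouping observation embeddings by cosine similarity and promoting only patterns seen three or more times. The function names and the 0.8 similarity threshold are illustrative assumptions, not AgentHandover's actual pipeline code:

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Greedy single-pass clustering: each observation joins the first cluster
// whose seed embedding is similar enough, otherwise it starts a new one.
// The 0.8 threshold is an invented example value.
function clusterObservations(embeddings, threshold = 0.8) {
  const clusters = []; // each: { members: [observation indices] }
  embeddings.forEach((emb, i) => {
    const home = clusters.find(
      (c) => cosine(embeddings[c.members[0]], emb) >= threshold
    );
    if (home) home.members.push(i);
    else clusters.push({ members: [i] });
  });
  // Only patterns with three or more observations become Skill candidates.
  return clusters.filter((c) => c.members.length >= 3);
}
```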

Output is a Skill file (+ knowledge base). Not a prompt, not a summary. A structured playbook with your strategy, steps, guardrails, and writing voice extracted from your own text. Each Skill has a confidence score that improves with every successful execution. If something goes wrong, the Skill adapts. (self-improving)

Safety: screenshots get deleted after processing. PII and API keys are auto-redacted. Everything is encrypted at rest. Zero telemetry. Nothing leaves your machine. Every Skill goes through lifecycle gates before any agent can touch it.

Pairs with Claude Code out of the box. Also OpenClaw, Codex, etc.

Repo: https://github.com/sandroandric/AgentHandover
Website: https://www.agenthandover.com/

If you’ve ever wished Claude just knew how you do things, that’s what this is for. Happy to answer anything. <3 and ofc credits to Claude Code for being my partner in crime.

Xoul – Local Personal Assistant Agent Release (Beta, v0.1.0-beta)

Xoul — An Open-Source AI Agent That Runs Locally

Introducing Xoul, a personal assistant agent powered by local LLMs and virtual machine isolation.

What Is Xoul

Xoul is a personal AI agent. It’s not a chatbot — it manages files, sends emails, browses the web, and runs code at the OS level. All actions run inside a QEMU virtual machine, keeping the host system untouched. When using a local LLM, personal data never leaves the machine.

Key Features

  • 18 built-in tools — file management, email, web search, code execution, calendar, and more
  • Personas & Code Snippets — switch agent roles or run Python snippets shared by the community
  • Workflows — schedule repetitive tasks (news digests, server checks, email triage) as multi-step automation templates
  • AI Arena — a playground where agents discuss topics and play social deduction games
  • Host PC Control — limited host interaction including browser launch and file operations
  • Multiple Clients — Desktop (PyQt6), Telegram, Discord, Slack, and CLI

Architecture

The Xoul agent runs inside a QEMU virtual machine. LLM inference is handled locally on the GPU via Ollama, while the desktop app serves as the host-side UI. VM isolation ensures the host system stays safe regardless of what the agent does.

Beyond local LLMs, Xoul also supports commercial APIs (Claude, GPT-5, Gemini, DeepSeek, Grok, Mistral) and external OpenAI-compatible servers (vLLM, LM Studio, etc.).
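For a sense of what “LLM inference via Ollama” looks like at the wire level, here is a minimal sketch of building a request against Ollama’s default local REST endpoint (port 11434, `/api/generate`). The helper name and model string are assumptions; Xoul’s actual client code may differ:

```javascript
// Build a request for Ollama's local generate endpoint. Ollama listens on
// http://localhost:11434 by default; `stream: false` asks for a single
// JSON response instead of a stream of chunks.
function buildGenerateRequest(model, prompt) {
  return {
    url: 'http://localhost:11434/api/generate',
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

// Usage (requires a running Ollama instance with the model pulled):
// const { url, options } = buildGenerateRequest('gpt-oss:20b', 'Summarize my inbox');
// const res = await fetch(url, options);
// const { response } = await res.json();
```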

Supported Models

For local execution, models are automatically recommended based on available VRAM:

  Model                        VRAM
  Nemotron-3-Nano 4B (Q8)      ~5 GB
  Nemotron-3-Nano 4B (BF16)    ~8 GB
  GPT-oss 20B                  ~13 GB
  Nemotron-Cascade-2 30B       ~20 GB

BGE-M3 (embedding) and Qwen 2.5 3B (summarization, CPU-only) are also installed automatically.

System Requirements

  Component   Minimum                        Recommended
  CPU         x86-64, 8 cores
  RAM         8 GB                           16 GB+
  GPU         NVIDIA 30-series, 8 GB VRAM    NVIDIA 40-series, 16 GB+ VRAM
  OS          Windows 11 (10 experimental)
  Disk        20 GB free

Installation

Quick Start

  1. Download the release file
  2. Extract xoul_rel.zip
  3. Run install.bat inside the extracted folder

install.bat handles file placement, dependency installation, and configuration automatically. Python 3.12, Ollama, and QEMU are installed as needed. An interactive setup walks through language selection, LLM model, VM configuration, user profile, and optional service integrations (Gmail, Tavily, Telegram, etc.).

Install from Source

git clone https://github.com/xoul-project/xoul.git
cd xoul
.\scripts\setup_env.ps1

Once setup completes, the Desktop App launches automatically. After that, you can start it with c:\xoul\desktop\xoul.bat.

Community

Through Xoul Store, you can import workflows, personas, and code snippets created by other users with one click. You can also publish your own.

License

Released under the MIT License.

Links

  • Website: https://www.xoulai.net/
  • GitHub: https://github.com/xoul-project/xoul
  • Discussions: https://github.com/xoul-project/xoul/discussions

Kodee’s Kotlin Roundup: Kotlin 2.3.20, Interview With Josh Long, and More

March was a busy month for Kotlin, with a new language release, fresh tooling, ecosystem updates, and plenty of inspiration ahead of KotlinConf’26. From practical improvements to exciting steps in AI and multiplatform, there’s a lot worth exploring. Here are the stories that stood out to me most.

Where you can learn more

  • Workshops – KotlinConf 2026, May 20–22, Munich
  • Spring AI Kotlin Tutorials – Build AI-Powered Applications
  • Google Summer of Code 2026 Is Here: Contribute to Kotlin
  • Elevating AI-Assisted Android Development and Improving LLMs With Android Bench

YouTube highlights

  • Explicit Backing Fields are experimental in Kotlin 2.3
  • Kotlin Devs Diversify: Android Is 25% Now
  • How Major Metros Run on Kotlin Multiplatform | Talking Kotlin #145

Amper 0.10 – JDK Provisioning, a Maven Converter, Custom Compiler Plugins, and More

Amper 0.10.0 is out, and it brings a variety of new features, such as JDK provisioning, custom Kotlin compiler plugins, a Maven-to-Amper converter, and numerous IDE improvements! Read on for all of the details, and see the release notes for the full list of changes and bug fixes.

To get support for Amper’s latest features, use IntelliJ IDEA 2025.3.4 or IntelliJ IDEA 2026.1 (or newer), and make sure the latest version of the Amper plugin is installed.

JDK provisioning

Amper needs a JDK (Java Development Kit) in order to perform various tasks in the project: compile Kotlin and Java sources, run tests, run JVM apps, etc.

Our philosophy is that you should be able to run your project without manually installing anything on your machine or having to configure anything. This is why Amper is able to provision a JDK automatically for you – JDK 21 by default. 

However, some projects require specific JDK versions. You can now specify the criteria for the necessary JDK in module.yaml, and Amper will download and install the matching JDK.

settings:
  jvm:
    jdk:
      version: 21 # major version
      distributions: [ zulu, temurin ] # acceptable distributions

Amper also takes the JAVA_HOME environment variable into account, since it is a common way to set the JDK to be used on the machine. You can read more about Amper’s JDK provisioning behavior in the documentation.

Maven converter and Maven plugin compatibility

If you have an existing Maven project, you don’t have to rewrite your build configuration from scratch. This release introduces a semi-automated conversion tool that reads your pom.xml files, including those in multi-module reactor projects, and generates the corresponding project.yaml and module.yaml files for you. To use it, simply run:

./amper tool convert-project

The converter maps your dependencies, BOMs, repositories, publishing coordinates, compiler flags, and other settings to their Amper equivalents. To support using both build systems during the transition, it sets layout: maven-like in every module so that your source directory structure, including src/main/java and src/main/kotlin, stays the same and no files need to be moved.

Well-known Maven plugins such as maven-compiler-plugin and spring-boot-maven-plugin are translated into built-in Amper settings. Other Maven plugins are added to the new mavenPlugins configuration section in module.yaml, and Amper can execute them during the build process through our Maven plugin compatibility layer.

The conversion is best-effort, so some projects may require tweaks afterward. For a full walkthrough and a list of limitations, see the documentation.

Kotlin compiler plugins

This release brings support for third-party Kotlin compiler plugins. Enabling this support is as easy as adding the following to module.yaml:

settings:
  kotlin:
    compilerPlugins:
      - id: org.example.my.plugin
        dependency: org.example:my-plugin:1.0.0
        options:
          myKey1: myValue1
          myKey2: myValue2

See the documentation for examples and how to enable IDE support for custom plugins.

We also added built-in support for the kotlinx.rpc and JsPlainObjects compiler plugins. 

IDE improvements

Reworked UX for running Amper commands

We’ve revisited the UI for creating and editing run configurations in the IDE. New custom views allow you to configure the options for run and test commands in a more convenient way:

Additionally, you can now create a configuration for any Amper command by choosing Amper in the Add New Configuration menu:

If you want to run a command in an ad-hoc way, you can use Run Anything (Ctrl+Ctrl) and prepend your command with amper:

Run gutters for native applications in module.yaml

Native applications (linux/app, macos/app, windows/app) can now be run from the IDE via the gutter:

Better test names in the Test tool window

The @DisplayName and @ParameterizedTest.name JUnit 5 annotations are now respected in the Test tool window when showing the test execution tree.

@ParameterizedTest(name = "Test #{0}")
@DisplayName("My parameterized test")
@ValueSource(ints = [1, 2, 3])
fun parameterized(i: Int) {}

Ktor plugin assistance

If your module has the Ktor server dependency, the module.yaml file provides support for searching and adding plugins via the Add Plugins… inlay:

Alternatively, you can use completion in the Kotlin code, which will add all the necessary dependencies to the module without you even having to touch the module.yaml file:

Support for profiling JVM applications

Note: This feature requires IntelliJ IDEA Ultimate.

The configuration for the run command in jvm/app modules can now be run using IntelliJ IDEA’s support for profilers:

Amper plugin development

The previous release of Amper brought the preview of Amper’s extensibility system. We received a lot of feedback, and we are working on extending the capabilities of plugins. While the ability to publish and share plugins is still a work in progress, a valuable improvement is already available in this release: you can now reference the module settings from the plugin using ${module.settings} in plugin.yaml:

Other improvements

Starting with version 0.10, Amper supports Maven profiles declared in the POM files of transitive dependencies.

In this release, we’ve also introduced the ability to add module descriptions in module.yaml. The description is formatted in Markdown and can occupy multiple lines. This text is used by the ./amper show modules command in the CLI, as well as by the IDE to show information about the module. For libraries, it is also used as a description in published metadata by default.

Updated default versions

We updated some of our default versions for toolchains and frameworks:

  • Kotlin: 2.3.20
  • Android minimum SDK: 23
  • Compose: 1.10.3
  • KSP: 2.3.6
  • Ktor: 3.4.1
  • Spring Boot: 4.0.5

Try Amper 0.10.0

To update an existing project, use the ./amper update command.

To get started with Amper, check out our Getting started guide. Take a look at some examples, follow a tutorial, or read the comprehensive user guide depending on your learning style.

Try Amper

Share your feedback

Amper is still experimental and under active development. You can provide feedback about your experience by joining the discussion in the Kotlinlang Slack’s #amper channel or sharing your suggestions and ideas in a YouTrack issue. Your input and use cases help shape the future of Amper!

Profile .NET Apps Without Restarting: Monitoring Comes to ReSharper

Tracking down performance bottlenecks in Visual Studio often means interrupting your workflow, restarting your application in profiling mode, and hoping you can reliably reproduce the exact issue. We think there’s a better way.

If you have already used Monitoring in Rider, this experience will feel familiar. Now, the same Monitoring experience is available in ReSharper, bringing real-time performance insights directly inside Visual Studio.

Try it in ReSharper 2026.1

Curious about what else is included in this update? Head over to the What’s New in ReSharper page to explore all the other improvements that have landed in the latest release.

What is Monitoring?

When you run or debug your app, ReSharper automatically opens the Monitoring tool window and shows you what’s happening in real time: CPU and memory usage, GC activity, counters, metrics, and so on.

The best part: profiling without restart

> Important: This feature requires a dotUltimate license.

The most powerful part of Monitoring is what happens after you notice a problem.

Monitoring does some profiling work in the background, so you can select a time range directly on the chart and open it in the built-in profiler. That means you do not need to stop the application, restart it in profiling mode, and try to reproduce the problem. You can go straight from “I see a spike here” to “show me the Call Tree for that exact interval”. In other words, you can open the selected range in the built-in profiler and inspect the collected data in detail: the call tree, call time, and related runtime events.

Automatic issue detection

Monitoring is not limited to charts. It can also automatically detect issues and list them below the timeline. Currently supported issue types include:

  • Performance hotspot
  • ASP.NET Core issues
    • Slow MVC action
    • Slow Razor page handler
    • Slow Razor view component
  • Database issues
    • Slow DB command
    • Excessive DB commands
    • Large DB result set
    • Excessive DB connections

These issues appear as the app runs, so you can catch bottlenecks while they occur rather than waiting until after a profiling session or checking logs.

Analyze issues in place

> Important: This feature requires a dotUltimate license.

The issue list is not just a report. It is also a starting point for investigation.

When Monitoring detects a problem, you can select that issue and analyze the corresponding time range in the built-in profiler. That gives you the same advantage as manual interval selection, but now the interesting intervals are found for you automatically.

Counters, metrics, and environment data

You can also use Monitoring as a live runtime dashboard. It includes tabs for counters, metrics, and environment data. This is especially handy when you want a single place to monitor both low-level runtime behavior and higher-level application signals during local development.

How to enable Monitoring in ReSharper

Monitoring is designed to be available by default. It starts automatically when you run or debug your project. If you want, you can change that behavior and keep Monitoring enabled only for debug sessions or turn it off completely. 

A simpler path from symptom to root cause

What makes Monitoring valuable is not any single chart or issue detector on its own. It is the workflow:

  1. You run the app.
  2. You notice a spike, slowdown, or detected issue.
  3. You select the interesting interval.
  4. You open it in the built-in profiler.
  5. You inspect the call tree and find the cause.

We are happy to bring Monitoring to ReSharper and make this runtime investigation workflow available in Visual Studio, as well.

Give it a try in the 2026.1 release, and as always, we would love to hear what works well, what is missing, and what you would like us to improve next.

Try it in ReSharper 2026.1

WCAG 2.2: What Changed, Why It Matters, and How to Implement It

Nine new success criteria. One removed. Here is what every frontend engineer needs to know.

WCAG 2.2 became an official W3C Recommendation on October 5, 2023. If your team is still targeting 2.1 as a compliance baseline, you are already behind. The W3C explicitly advises using 2.2 to maximize future applicability of accessibility efforts, and regulators in the EU, UK, and US are actively aligning their policies to the latest version.

This article covers every new success criterion using a consistent format: what the spec requires, why the criterion exists and who it protects, and how to implement it in practice.

What Was Removed First: 4.1.1 Parsing

Before the new criteria, one was cut. WCAG 2.2 removed 4.1.1 Parsing, which previously required well-formed HTML so assistive technologies could reliably parse it.

Why removed: Modern browsers and screen readers have become resilient enough to handle malformed markup without accessibility failures. The criterion no longer reliably predicted real-world accessibility outcomes, so the working group dropped it.

Practical note: If your organization is contractually obligated to WCAG 2.0 or 2.1 conformance, you may still need to test and report on 4.1.1 separately. For new 2.2 audits, it is gone.

The 9 New Success Criteria

1. Focus Not Obscured (Minimum) — 2.4.11 — Level AA

What: When a UI component receives keyboard focus, the focused element must not be entirely hidden by author-created content. Partially obscured is acceptable at this level. Entirely hidden is not.

Why: Users who navigate by keyboard (people with motor disabilities, switch device users, power users) need to see where focus is at all times. Sticky headers, floating cookie banners, fixed chat widgets, and bottom navigation bars are the most common offenders. When focus moves behind one of these layers and disappears completely, the user loses their place on the page with no visual cue for what is currently selected. This is especially disorienting for users with cognitive disabilities who are more sensitive to context loss during a task.

How to implement:

The core fix is ensuring scroll-padding-top accounts for any fixed header height so the browser scrolls enough to keep focused elements visible.

/* If your sticky header is 64px tall */
html {
  scroll-padding-top: 80px; /* header height + breathing room */
}

/* Alternatively, scoped to focusable elements */
a:focus,
button:focus,
[tabindex]:focus {
  scroll-margin-top: 80px;
}

For dynamic header heights (collapsing navs, announcement banners that appear after load), update the value from JavaScript:

function updateScrollPadding() {
  const header = document.querySelector('.sticky-header');
  const height = header?.getBoundingClientRect().height ?? 0;
  document.documentElement.style.scrollPaddingTop = `${height + 16}px`;
}

window.addEventListener('resize', updateScrollPadding);
updateScrollPadding();

Test it: Tab through your page with a sticky header visible. Every focused element should be at least partially visible above the fold.

2. Focus Not Obscured (Enhanced) — 2.4.12 — Level AAA

What: Same intent as 2.4.11, but stricter. The focused component must not be obscured at all, not even partially.

Why: At AA (2.4.11), a focused element that is 10% visible technically passes. For users with low vision who rely on high zoom levels or screen magnification, even partial obscuring can make the focus indicator undetectable in practice. The AAA version closes that gap entirely.

How to implement:

Everything from 2.4.11 applies. The additional requirement is that no part of the focused element is covered by overlapping author-created content. In practice this means:

  • scroll-padding values must fully clear the focused element above any sticky layers.
  • Fixed overlays (modals, drawers, sheets) must trap focus inside themselves while open, so keyboard focus can never land on content behind them.

// Trap focus inside an open modal
function trapFocus(modalElement) {
  const focusable = modalElement.querySelectorAll(
    'a, button, input, textarea, select, [tabindex]:not([tabindex="-1"])'
  );
  const first = focusable[0];
  const last = focusable[focusable.length - 1];

  modalElement.addEventListener('keydown', (e) => {
    if (e.key !== 'Tab') return;
    if (e.shiftKey) {
      if (document.activeElement === first) {
        e.preventDefault();
        last.focus();
      }
    } else {
      if (document.activeElement === last) {
        e.preventDefault();
        first.focus();
      }
    }
  });
}

3. Focus Appearance — 2.4.13 — Level AAA

What: When a keyboard focus indicator is visible, it must meet specific size and contrast requirements. The focus indicator area must be at least the perimeter of the unfocused component multiplied by 2 CSS pixels. The contrast ratio between focused and unfocused states must be at least 3:1 against adjacent colors.

Why: Browser default focus outlines are frequently invisible against common backgrounds, and many codebases globally suppress them with outline: none (still a widespread anti-pattern). Users with low vision, cognitive disabilities, and anyone relying entirely on keyboard navigation depend on a focus indicator that is visually obvious, not just technically present. A faint, thin blue ring at low contrast does not serve these users.

How to implement:

The first step is removing the global outline: none pattern. If you need to suppress the browser ring for mouse users, use :focus-visible instead of :focus:

/* Wrong: removes focus ring for everyone, including keyboard users */
*:focus {
  outline: none;
}

/* Right: removes ring only when pointer (not keyboard) is in use */
*:focus:not(:focus-visible) {
  outline: none;
}

/* Custom focus indicator that satisfies AAA geometric and contrast requirements */
*:focus-visible {
  outline: 3px solid #0f62fe;
  outline-offset: 2px;
  border-radius: 2px;
}

A practical shortcut: a 3px solid outline in a color with at least 3:1 contrast against the surrounding background satisfies the geometric requirement for most standard interactive components. For components on dark surfaces, check the contrast of your focus color against the dark background, not just the default page background.
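To verify that a candidate focus color actually clears the 3:1 bar, you can compute the WCAG contrast ratio directly. Here is a minimal sketch using the spec's relative-luminance formula (the helper names are mine):

```javascript
// WCAG relative luminance for an sRGB color given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]) {
  const lin = (c) => {
    const s = c / 255;
    // Linearize each sRGB channel per the WCAG definition.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(colorA, colorB) {
  const [hi, lo] = [relativeLuminance(colorA), relativeLuminance(colorB)].sort(
    (x, y) => y - x
  );
  return (hi + 0.05) / (lo + 0.05);
}

// e.g. check a focus color against the page background:
// contrastRatio([15, 98, 254], [255, 255, 255]) >= 3
```

Running this in a design-token test suite keeps focus indicators from silently regressing below 3:1 when a palette changes.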

4. Dragging Movements — 2.5.7 — Level AA

What: Any functionality that uses a dragging movement (click-and-drag, touch drag) must also be achievable with a single pointer action (click or tap) without dragging. Exceptions apply only when the drag is essential to the functionality itself.

Why: Dragging requires simultaneously pressing, holding, and moving a pointer. This compound gesture is unreliable or impossible for users with hand tremors, limited fine motor control, or motor disabilities affecting pointer precision. Sortable lists, kanban boards, sliders, map pan gestures, and date range pickers are common failure cases. The criterion does not prohibit drag interactions. It requires that a non-drag path exists to accomplish the same result.

How to implement:

For sortable lists, provide explicit move buttons alongside the drag handle:

function SortableItem({ item, onMoveUp, onMoveDown }) {
  return (
    <div draggable onDragStart={...} onDragEnd={...}>
      <span>{item.label}</span>
      <button aria-label={`Move ${item.label} up`} onClick={onMoveUp}>↑</button>
      <button aria-label={`Move ${item.label} down`} onClick={onMoveDown}>↓</button>
    </div>
  );
}

For range sliders, use native <input type="range"> wherever possible. It supports arrow key adjustment out of the box. Custom slider implementations frequently break keyboard support:

<input
  type="range"
  min={0}
  max={100}
  value={value}
  onChange={(e) => setValue(Number(e.target.value))}
  aria-label="Price range maximum"
/>

For map or canvas drag interactions, provide explicit pan controls: arrow-key panning and clickable pan buttons in the UI.

5. Target Size (Minimum) — 2.5.8 — Level AA

What: The size of the pointer target for interactive elements must be at least 24×24 CSS pixels. Exceptions apply when: the target’s offset from adjacent targets is at least 24px, the target is inline within text content, the browser controls the target size (default form controls), or a small size is essential to the information conveyed.

Why: Small tap targets fail users with tremors, limited dexterity, or motor disabilities who use alternative pointer devices with reduced precision. Tightly packed icon buttons, small checkboxes, link-dense navigation menus, and close buttons in notification toasts are the most common failure patterns. Note that 24×24 px is the AA minimum; the AAA version (2.5.5, carried forward from 2.1) requires 44×44 px, and most mobile UX guidelines already recommend 44 px. WCAG 2.2 establishes the legal floor.

How to implement:

Set a baseline minimum for all interactive elements:

button,
a,
[role="button"],
input[type="checkbox"],
input[type="radio"] {
  min-width: 24px;
  min-height: 24px;
}

/* Prefer the AAA-level 44x44 on touch interfaces */
@media (pointer: coarse) {
  button,
  a,
  [role="button"] {
    min-width: 44px;
    min-height: 44px;
  }
}

For icon-only buttons where the visual size is constrained by design, expand the hit area using padding while keeping the visible footprint the same:

.icon-button {
  padding: 10px; /* expands hit area to 44x44 if icon is 24x24 */
  display: inline-flex;
  align-items: center;
  justify-content: center;
}

The spacing exception is a legitimate tool for constrained layouts. If two 16×16 icons are spaced so their center-to-center distance is 24px or more, they satisfy the minimum even without being 24×24 in physical size. Use it as a fallback, not a design default.
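For audits, the size-or-spacing rule can be approximated in code. This is a simplified sketch with invented names; the normative spacing exception is defined via non-overlapping 24 px circles, which this center-distance check only roughly captures:

```javascript
// Approximate audit check for 2.5.8 Target Size (Minimum).
// `target` and `neighbors` are boxes: { x, y, width, height } in CSS pixels.
function meetsTargetSizeMinimum(target, neighbors) {
  // Passes outright at 24x24 CSS px or larger.
  if (target.width >= 24 && target.height >= 24) return true;
  // Undersized targets pass only if every adjacent target's center is
  // at least 24px away (the spacing exception, approximated).
  const cx = target.x + target.width / 2;
  const cy = target.y + target.height / 2;
  return neighbors.every((n) => {
    const nx = n.x + n.width / 2;
    const ny = n.y + n.height / 2;
    return Math.hypot(nx - cx, ny - cy) >= 24;
  });
}
```

A check like this can run against `getBoundingClientRect()` output in an end-to-end test, flagging crowded toolbars before they ship.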

6. Consistent Help — 3.2.6 — Level A

What: If a web page provides a help mechanism (human contact, self-help documentation, automated contact, or a contact form), that mechanism must appear in the same relative location across all pages within the site.

Why: Users with cognitive disabilities often need help completing tasks and struggle when support resources appear in different places on different pages. If the help icon is in the top-right corner on the homepage but shifts to the footer on the checkout page, the inconsistency creates friction precisely when the user is most likely to need assistance. This criterion does not require that you have a help mechanism. It only requires that if you do, its location is stable.

How to implement:

This is primarily a layout and design system decision. Anchor help mechanisms inside a shared layout component so they cannot drift between pages:

// Layout.jsx
export function Layout({ children }) {
  return (
    <>
      <GlobalHeader />     {/* help trigger lives here, always */}
      <main>{children}</main>
      <GlobalFooter />
    </>
  );
}

Avoid conditionally hiding the help trigger on specific page types. If suppression is unavoidable (full-screen checkout flows, immersive experiences), make sure the mechanism reappears in the same location once normal layout resumes.

“Same relative location” means the same area of the page (top-right, bottom-right, etc.), not exact pixel coordinates. Responsive layouts that shift the help button between breakpoints are acceptable as long as it is consistently placed within each breakpoint’s layout pattern.

7. Redundant Entry — 3.3.7 — Level A

What: Information that a user has already provided in a multi-step process must either be auto-populated in subsequent steps or be selectable from previously entered values. Users must not be required to re-enter the same information within the same session unless re-entry is essential (e.g., password confirmation for security) or the information is no longer valid.

Why: Re-entering data is a significant cognitive and motor burden. For users with cognitive disabilities, being asked to retype a name or address they entered three steps ago interrupts task flow, increases error likelihood, and often causes abandonment. For users with motor disabilities, every additional keystroke carries a real physical cost. This criterion formalizes what good UX already recommends: do not ask for something you already have.

How to implement:

In a multi-step React form, store session state at a high level and pre-populate later steps:

// FormContext.jsx
const FormContext = React.createContext({});

export function FormProvider({ children }) {
  const [formData, setFormData] = React.useState({});

  const updateFormData = (values) => {
    setFormData((prev) => ({ ...prev, ...values }));
  };

  return (
    <FormContext.Provider value={{ formData, updateFormData }}>
      {children}
    </FormContext.Provider>
  );
}

// Step 3: Shipping -- pre-populate from billing when same
function ShippingStep() {
  const { formData, updateFormData } = React.useContext(FormContext);
  const [sameAsBilling, setSameAsBilling] = React.useState(false);

  const address = sameAsBilling
    ? formData.billingAddress
    : formData.shippingAddress;

  return (
    <>
      <label>
        <input
          type="checkbox"
          checked={sameAsBilling}
          onChange={(e) => setSameAsBilling(e.target.checked)}
        />
        Same as billing address
      </label>
      <AddressFields defaultValues={address} onChange={...} />
    </>
  );
}

The “same as billing” pattern already present in most e-commerce checkouts is a textbook 3.3.7 implementation. Apply the same logic to any multi-step flow where information requested in step N was already collected in an earlier step.

8. Accessible Authentication (Minimum) — 3.3.8 — Level AA

What: A cognitive function test (memorizing a password, solving a puzzle, transcribing characters) must not be required at any step of an authentication process unless at least one of the following holds:

  • an alternative authentication method is available that does not require a cognitive function test,
  • a mechanism is available to help complete the test (such as copy-paste support or a password manager), or
  • the test involves recognizing objects, or recognizing personal content the user themselves provided.

Why: Password recall is itself a cognitive function test. Many users with cognitive disabilities, memory impairments, or learning disabilities cannot reliably memorize and recall complex passwords on demand. CAPTCHAs add another cognitive or visual puzzle on top of that. This criterion protects access to the authentication layer itself, which is a prerequisite for using everything else on the platform.

How to implement:

The highest-impact single change: allow paste into password fields and respect autocomplete attributes. Blocking paste breaks password managers and forces manual re-entry.

// Wrong: blocks paste, breaks password managers
<input
  type="password"
  onPaste={(e) => e.preventDefault()}
/>

// Right: paste allowed, autocomplete declared
<input
  type="password"
  autoComplete="current-password"
/>

Use the correct autocomplete values so the browser and password managers can fill credentials automatically:

<input type="email" autocomplete="username" />
<input type="password" autocomplete="current-password" />
<input type="password" autocomplete="new-password" /> <!-- registration -->

Additional paths to compliance:

  • Offer magic link login (no password to recall).
  • Support passkeys as an alternative.
  • If you use a CAPTCHA, provide an audio alternative and a non-CAPTCHA path for users who cannot complete visual challenges.

The object recognition and personal content exceptions cover tests that ask users to identify common objects or images they uploaded themselves (security images, personal photos). These are permitted because the cognitive anchor is recognition or personal memory, not abstract recall.

9. Accessible Authentication (Enhanced) — 3.3.9 — Level AAA

What: Same as 3.3.8, but the object recognition and personal content exceptions are removed. A cognitive function test is only permitted if an alternative authentication method without such a test, or a mechanism to help complete the test, is available.

Why: Even object recognition and personal image selection require a level of memory and visual processing that users with severe cognitive or visual disabilities may not be able to reliably perform. The AAA version closes that loophole: authentication cannot fall back on recognition-based tests.

How to implement:

The only conformant paths at AAA are authentication methods that require no cognitive recall:

Passkeys (WebAuthn/FIDO2): device-level biometric or PIN-based auth. No password to memorize or recall.

// Passkey (WebAuthn) authentication; the challenge and credential id
// are ArrayBuffers supplied by the server
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: serverGeneratedChallenge,
    allowCredentials: [{ type: 'public-key', id: existingCredentialId }],
    userVerification: 'preferred',
  },
});
// Send the assertion to the server for verification

Magic links: one-time login URLs delivered to a verified email or phone. The user clicks a link in their inbox. No password involved.

SSO delegation: the authentication burden is delegated to a trusted identity provider. The provider’s own authentication is outside your conformance boundary.

AAA is not required for most products, but passkeys are rapidly becoming the industry default regardless of compliance requirements. Implementing them satisfies the accessibility requirement and the general trend toward passwordless authentication simultaneously.

Audit Checklist for WCAG 2.2 Compliance

If you are auditing an existing product, prioritize in this order:

Level A (minimum baseline)

  • [ ] Help mechanisms appear in a consistent location across all pages (3.2.6)
  • [ ] Multi-step forms do not re-ask for information already collected in the session (3.3.7)

Level AA (legal and enterprise standard)

  • [ ] No focused element is entirely hidden by sticky headers, footers, or overlays (2.4.11)
  • [ ] All interactive targets are at least 24×24 CSS pixels or have adequate spacing (2.5.8)
  • [ ] Every drag interaction has a single-pointer alternative (2.5.7)
  • [ ] Password fields allow paste and declare correct autocomplete attributes (3.3.8)
  • [ ] No authentication step requires a cognitive function test without an accessible alternative (3.3.8)

Level AAA (aspirational or contractual)

  • [ ] No focused element is partially obscured by author-created overlays (2.4.12)
  • [ ] Focus indicators meet minimum size and 3:1 contrast requirements (2.4.13)
  • [ ] Authentication requires no cognitive function tests of any kind (3.3.9)

The Bigger Picture

WCAG 2.2’s additions are tightly scoped around three user groups: people with cognitive or learning disabilities, users with low vision, and users on mobile and touch devices. Every new criterion maps to a failure mode that real products ship regularly: password fields that block paste, drag interactions with no keyboard fallback, sticky headers that swallow focused elements, icon buttons too small to tap precisely.

None of these fixes are expensive once you know what to look for. The authentication changes are often one autocomplete attribute away. The target size and focus visibility issues are a few lines of CSS. The redundant entry problem is a state management question you have probably already partially solved elsewhere in your codebase.
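For illustration, the target size and focus visibility fixes often amount to a few declarations like these (the selector name is hypothetical, and the outline color must be checked for 3:1 contrast against your own palette):

```css
/* 2.5.8: give small interactive controls a minimum hit area */
.icon-button {
  min-width: 24px;
  min-height: 24px;
}

/* 2.4.13: a clearly visible focus indicator */
:focus-visible {
  outline: 3px solid #1a47d6; /* verify 3:1 contrast against adjacent colors */
  outline-offset: 2px;
}
```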

The investment is low. The user impact is not.

Questions about implementing any of these criteria? Drop them in the comments.