I Built a YouTube Dislike Viewer with Next.js 16: Here's How

Ever since YouTube removed the public dislike count, it's been harder to judge video quality at a glance. So I built a simple tool to bring it back.

Live site: https://www.youtubedislikeviewer.shop

What It Does

Paste any YouTube URL or video ID, and instantly see:

  • πŸ‘ Likes & πŸ‘Ž Dislikes
  • πŸ“Š Like/Dislike ratio bar
  • πŸ‘ View count & rating
  • πŸ• Search history (stored locally)

Tech Stack

  • Next.js 16 (App Router + Turbopack)
  • React 19 + TypeScript 5.9
  • Tailwind CSS 4
  • Return YouTube Dislike API (community-driven dislike data)

Key Design Decisions

API Proxy with Caching

Instead of calling the dislike API directly from the client, I set up a server-side proxy route at /api/dislike. This
lets me add cache headers (s-maxage=300, stale-while-revalidate=600) so repeated lookups for the same video are fast
and cheap.

// app/api/dislike/route.ts (simplified)
export async function GET(request: Request) {
  const videoId = new URL(request.url).searchParams.get('videoId');
  if (!videoId) {
    return Response.json({ error: 'Missing videoId' }, { status: 400 });
  }
  const res = await fetch(
    `https://returnyoutubedislikeapi.com/votes?videoId=${videoId}`
  );
  const data = await res.json();
  return Response.json(data, {
    headers: {
      'Cache-Control': 'public, s-maxage=300, stale-while-revalidate=600',
    },
  });
}

Robust URL Parsing

YouTube URLs come in many flavors: youtube.com/watch?v=, youtu.be/, /embed/, or just a raw 11-character ID. A single extraction function handles all of them with regex.
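
As a sketch of that approach (the function name and exact patterns here are illustrative, not the site's actual code), a single regex-based helper might look like:

```typescript
// Illustrative sketch of a single extraction helper; the name and
// patterns are hypothetical, not the site's actual implementation.
function extractVideoId(input: string): string | null {
  const trimmed = input.trim();

  // Case 1: a raw 11-character video ID (letters, digits, "-", "_")
  if (/^[A-Za-z0-9_-]{11}$/.test(trimmed)) return trimmed;

  // Case 2: the common URL shapes - watch?v=, /embed/, youtu.be/
  const match = trimmed.match(
    /(?:youtube\.com\/(?:watch\?(?:.*&)?v=|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
  );
  return match ? match[1] : null;
}
```

Handling the raw-ID case first means the helper accepts pasted IDs without any URL parsing at all.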

Thumbnail Fallback

Not every video has a maxresdefault thumbnail. The component tries the highest quality first and falls back to hqdefault on error, so there are no broken images.
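
A minimal sketch of that fallback (the helper name is hypothetical; the URL pattern follows YouTube's public i.ytimg.com thumbnail convention, and the component's actual code may differ):

```typescript
// Ordered thumbnail URLs to try, highest quality first.
// Uses YouTube's public i.ytimg.com thumbnail convention.
function thumbnailCandidates(videoId: string): string[] {
  return [
    `https://i.ytimg.com/vi/${videoId}/maxresdefault.jpg`,
    `https://i.ytimg.com/vi/${videoId}/hqdefault.jpg`,
  ];
}

// In the component, an <img onError> handler can advance to the next candidate:
// <img src={candidates[index]} onError={() => setIndex(i => i + 1)} />
```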

What I Learned

  1. Next.js 16 App Router is mature and pleasant to work with. The server/client component split feels natural once you get used to it.
  2. Caching at the edge with simple Cache-Control headers goes a long way: no Redis needed for a project this size.
  3. localStorage is still a perfectly fine solution for lightweight client-side persistence like search history.

Try It Out

Check it out at https://www.youtubedislikeviewer.shop and let me know what you think!

Python vs. a Modern BASIC Interpreter: When the "Toy Language" Actually Wins

I like Python. A lot, actually. It is hard to argue against it: Python dominates data science, AI, and a large part of backend development.

But I am also a C++ developer, and over the last year I built my own interpreter: jdBasic.

Not as a Python replacement, but as an experiment in a different direction.

The question behind it was simple:

What happens if you design a language and an interpreter purely around reducing friction during everyday development and experimentation?

This article is not about hype. It is about very practical trade-offs.

Motivation: The Hidden Cost of "Just Import a Library"

Python's strength is its ecosystem. At the same time, that ecosystem introduces a constant overhead:

  • virtual environments
  • package management
  • imports for even small tasks
  • tooling setup before you can actually think

That cost is usually acceptable. Sometimes it is not.

jdBasic explores a different idea:

Bake common, complex operations directly into the language instead of outsourcing them to libraries.

1. The “Always-Open” Utility Console

One of the most common interruptions in development is context switching.

  • open a terminal
  • start a REPL
  • import something
  • run a quick check
  • close everything again

jdBasic is small and fast enough that I simply keep it open all day. It works like a scratchpad.

Need to check if a text block fits in a 500-char database field?

? LEN("paste the massive string here...")

Quick hex arithmetic:

? $FF * 2

No setup, no imports, no ceremony.

2. Unique Random Numbers Without Loops

Generating something like "6 out of 49" sounds trivial, but in many languages it immediately leads to loops or helper libraries.

jdBasic supports APL-style vector operations.
That means you can treat numbers like a deck of cards:

  1. generate 1..49
  2. shuffle the sequence
  3. take the first 6 elements

' IOTA(49, 1) generates 1..49, SHUFFLE randomizes, TAKE gets the first 6
PRINT TAKE(6, SHUFFLE(IOTA(49, 1)))
' Output: [12 4 49 33 1 18]

No loops, no duplicate checks, no additional code.

3. ASCII Visualization in a Single Statement

In Python, visualization usually means installing and configuring libraries like matplotlib or pandas.

jdBasic takes a different approach:
2D arrays and string matrices are native data types.

This makes it possible to calculate and render ASCII charts directly in the console.

The following example is a complete biorhythm chart (physical, emotional, intellectual) rendered in one statement:

' One-liner Biorhythm Chart
BD="1967-10-21":W=41:H=21:D=DATEDIFF("D",CVDATE(BD),NOW()):X=D-W/2+IOTA(W):C=RESHAPE([" "],[H,W]):PY=INT((SIN(2*PI*X/23)+1)*((H-1)/2)):EY=INT((SIN(2*PI*X/28)+1)*((H-1)/2)):IY=INT((SIN(2*PI*X/33)+1)*((H-1)/2)):C[H/2,IOTA(W)-1]="-":C[IOTA(H)-1,INT(W/2)]="|":C[PY,IOTA(W)-1]="P":C[EY,IOTA(W)-1]="E":C[IY,IOTA(W)-1]="I":PRINT "Biorhythm for " + BD:PRINT FRMV$(C)

It calculates the sine waves for three different cycles, maps them to a 2D grid, overlays axes, and prints the result, all without a single external library or loop.

It returns to the roots of the 8-bit era: the computer is a calculator that is always ready.

4. Persistent Workspaces

In Python, a REPL session is temporary.
Once you close it, everything is gone unless you explicitly serialize your state.

jdBasic reintroduces an old idea: persistent workspaces.

SAVEWS "debugging_session"

The next morning, I open my console and type:

LOADWS "debugging_session"

Variables, functions, objects, history – everything is restored.
I use this constantly for long-running investigations:

  • one workspace for database analysis
  • one for AI experiments
  • one for automation tasks

5. Working with Corporate Data (MS Access, ADODB)

Enterprise environments often come with "unfashionable" data sources.
MS Access databases are a good example.

In Python, this usually means:

  • ODBC drivers
  • pyodbc
  • platform-specific setup

jdBasic uses COM and ADODB directly, without external libraries.

I keep a workspace called “SQL_Console” that contains a simple 150-line script (acct.jdb). It wraps the complexity of ADO into a simple REPL.

The Implementation (Simplified):

' Connect to Access without external libraries
conn = CREATEOBJECT("ADODB.Connection")
conn.Open("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Orders.accdb")

' Execute SQL and get results
rs = CREATEOBJECT("ADODB.Recordset")
rs.Open("SELECT * FROM Orders", conn)

' Print results
DO WHILE NOT rs.EOF
    PRINT rs.Fields("Customer").Value; " | "; rs.Fields("Amount").Value
    rs.MoveNext()
LOOP

Because this lives in a workspace, I never re-type the connection string or reconfigure anything. I just load SQL_Console and fire queries:

ExecuteSQL "SELECT * FROM Users WHERE ID=5"

It returns a formatted Map or Array immediately.

6. Desktop Automation: Controlling Windows Native Apps

Automating Windows applications used to be easy in the VB6 era.
In Python, it is possible, but often feels bolted on.

In jdBasic, COM automation is a core language feature.

' Launching Microsoft Word...
wordApp = CREATEOBJECT("Word.Application")
wordApp.Visible = TRUE
doc = wordApp.Documents.Add()

' Select text and format it via COM
sel = wordApp.Selection
sel.Font.Name = "Segoe UI"
sel.Style = -2 ' wdStyleHeading1
sel.TypeText "My Generated Doc"

7. Vector Math Without NumPy

In Python, element-wise math on plain lists means a loop or a comprehension; in practice, you reach for NumPy.

Python:

import numpy as np
V = np.array([10, 20, 30, 40])
print(V * 2)

jdBasic:
In jdBasic, arrays are first-class citizens. The interpreter knows how to handle math on collections natively.

V = [10, 20, 30, 40]
PRINT "V * 2: "; V * 2
' Output: [20 40 60 80]

There are no imports. No pip install. The interpreter understands what you mean.

8. Learning AI with Transparent Tensors

For production AI, Python frameworks are unbeatable.

For learning how things actually work internally, they can be opaque.

jdBasic has a built-in Tensor type with automatic differentiation.
Gradients are explicit and inspectable.

' Built-in Autodiff
A = TENSOR.FROM([[1, 2], [3, 4]])
B = TENSOR.FROM([[5, 6], [7, 8]])

' Matrix Multiplication
C = TENSOR.MATMUL(A, B)

' Calculate Gradients
TENSOR.BACKWARD C
PRINT "Gradient of A:"; TENSOR.TOARRAY(A.grad)

This is not about performance.
It is about understanding.

9. Micro-Services Without a Framework

In Python, even a small HTTP API usually involves a framework.

jdBasic includes an HTTP server as a language feature.

FUNC HandleApi(request)
  response = {
    "status": "ok",
    "server_time": NOW()
  }
  RETURN response
ENDFUNC

HTTP.SERVER.ON_POST "/api/info", "HandleApi"
HTTP.SERVER.START(8080)

Define a function, return a map, get JSON.

The Verdict: Ecosystem vs. Immediacy

This is not a "Python vs. BASIC" argument.
It is about choosing where you want to pay the cost.

Feature       | jdBasic                    | Python
Session State | Native                     | Manual
Vector Math   | Built-in                   | NumPy
Automation    | Native COM / ADODB support | pyodbc / pywin32
Setup         | Single executable          | Virtual envs, pip, package management

For production systems and large teams: Python is the right choice.

For an always-on, persistent, low-friction development console:
sometimes, my jdBasic interpreter is surprisingly effective.

BASIC never really disappeared 🙂

You can explore the interpreter's source code, dive into the documentation, and see more examples over at the official GitHub repository: jdBasic

You can try the main features online at: jdBasic

Claude Sync: Sync Your Claude Code Sessions Across All Your Devices Simplified

If you use Claude Code (Anthropic's official CLI), you've probably experienced this frustration:

You're deep into a coding session on your work laptop. Claude remembers your project context, your preferences, your conversation history. Everything is flowing perfectly.

Then you switch to your personal MacBook… and it's all gone.

Claude doesn't know what you were working on. Your custom agents? Gone. Your project memory? Vanished. You have to start from scratch.

I built Claude Sync to fix this.

What is Claude Sync?

Claude Sync is an open-source CLI tool that synchronizes your ~/.claude directory across devices using encrypted cloud storage.

Key Features:

  • πŸ” End-to-end encryption – Files encrypted with age before upload
  • πŸ”‘ Passphrase-based keys – Same passphrase = same key on any device
  • ☁️ Multi-cloud support – Cloudflare R2, AWS S3, or Google Cloud Storage
  • πŸ†“ Free tier friendly – Works within free storage limits
  • ⚑ Simple CLI – Just push and pull
# That's literally it
claude-sync push   # Upload changes
claude-sync pull   # Download changes

What Gets Synced?

Everything Claude Code stores locally:

What          | Why It Matters
projects/     | Session files, auto-memory for each project
history.jsonl | Your command history
agents/       | Custom agents you've created
skills/       | Custom skills
plugins/      | Installed plugins
rules/        | Custom rules
settings.json | Your preferences
CLAUDE.md     | Global instructions for Claude

Quick Start Guide

Install Claude Sync

Choose your preferred method:

# npm (recommended - works everywhere)
npm install -g @tawandotorg/claude-sync

# Or use npx for one-time use
npx @tawandotorg/claude-sync init

Daily Workflow

Once set up, your workflow is simple:

# Start of day (or when switching devices)
claude-sync pull

# ... use Claude Code normally ...

# End of day (or before switching devices)
claude-sync push

Pro Tip: Automate It

Add to your ~/.zshrc or ~/.bashrc:

# Auto-pull on shell start
if command -v claude-sync &> /dev/null; then
  claude-sync pull -q &
fi

# Auto-push on shell exit
trap 'claude-sync push -q' EXIT

Get Started

npm install -g @tawandotorg/claude-sync
claude-sync init
claude-sync push

GitHub: github.com/tawanorg/claude-sync

Documentation: tawanorg.github.io/claude-sync

Feedback Welcome!

This is an open-source project. If you:

  • Find bugs 🐛
  • Have feature ideas 💡
  • Want to contribute 🤝

Open an issue or PR on GitHub!

Have you struggled with syncing Claude Code across devices? What solutions have you tried? Let me know in the comments!

Copilot vs Cursor vs Cody 2026: AI Coding Compared

"I'm already paying for one AI coding tool. Should I switch?"

This is the question I keep hearing from developers. GitHub Copilot was first. Cursor disrupted everything. And Cody quietly became the best option nobody talks about.

After using all three on real projects (React apps, Python backends, infrastructure scripts), I have strong opinions.

Spoiler: The best choice depends on one question: How do you work?

TL;DR: The Quick Verdict

For staying in your current IDE: GitHub Copilot wins. Works everywhere.

For maximum AI power: Cursor wins. Agent mode is unmatched.

For large codebase understanding: Cody wins. Sourcegraph's search is killer.

For price-conscious developers: Copilot Free. 12,000 completions/month free.

For teams on GitHub: Copilot. Native integration matters.

My pick: Cursor for serious development work, Copilot Free as my backup when I'm on a random machine. Cody if you work on massive enterprise codebases.

Quick Comparison (2026)

Feature          | GitHub Copilot           | Cursor                   | Cody
Price            | Free / $10-19/mo         | $20/mo                   | Free / $9/mo / Enterprise
IDE              | Any (extension)          | Own IDE (VS Code fork)   | VS Code, JetBrains, Neovim
Agent Mode       | ❌ Limited               | ✅ Full agent            | ⚠️ Growing
Multi-file Edits | ✅ Copilot Edits         | ✅ Composer              | ✅ Edit mode
Codebase Context | Good                     | ⭐ Excellent             | ⭐ Excellent (Sourcegraph)
Model Choice     | GPT-4o, Claude 3.5, o1   | Claude Sonnet 4, GPT-4o  | Claude, GPT, Gemini
Free Tier        | ⭐ 12,000 completions/mo | 2,000 completions        | 200 chats, unlimited autocomplete
Best For         | Everyone, especially GitHub users | Power users, full-stack devs | Enterprise, large codebases

The 2026 Landscape: What's Changed

The AI coding space has exploded. Here's what matters:

GitHub Copilot's big moves:

  • Free tier launched - 12,000 completions/month, limited chat
  • Model choice - Pick between GPT-4o, Claude 3.5 Sonnet, or o1
  • Copilot Edits - Multi-file editing (finally)
  • Workspace Agent - Better codebase understanding
  • Still works everywhere: VS Code, JetBrains, Neovim, Visual Studio

Cursor's rise:

  • $9 billion valuation - Not a toy anymore
  • Agent mode - Run commands, modify files, fix its own errors
  • Composer - Multi-file generation that actually works
  • Tab prediction - Predicts where you'll edit next
  • Became the default recommendation for "best AI coding tool"

Cody's quiet revolution:

  • Sourcegraph integration - Unmatched codebase understanding
  • Generous free tier - Unlimited autocomplete, 200 chats/month
  • Multiple LLMs - Claude 3.5, GPT-4o, Gemini 2.0 Flash
  • Pro tier at $9/month - Cheapest paid option
  • Works in VS Code, JetBrains, Neovim (not locked to one IDE)

Where GitHub Copilot Wins 🏆

1. It Works Everywhere

This is Copilot's superpower. It's an extension, not an IDE.

Use it in:

  • VS Code
  • JetBrains (IntelliJ, PyCharm, WebStorm, etc.)
  • Neovim
  • Visual Studio
  • The GitHub website
  • GitHub Mobile

Cursor and Cody require you to either switch IDEs or install specific extensions. Copilot follows you everywhere.

If you've spent years customizing your IDE setup, Copilot lets you keep it. That's worth a lot.

2. The Free Tier is Actually Useful

12,000 completions per month. Limited chat. No credit card required.

For hobbyists, students, and even many professionals, this is enough. You're not hitting that limit unless you're accepting completions all day every day.

The free tier math: 12,000 ÷ 22 working days ≈ 545 completions per day. That's plenty for most developers.

Cody's free tier is generous for chat (200/month), but that's different from inline completions. Cursor's free tier (2,000 completions) runs out fast.

3. GitHub Integration

If your team lives on GitHub:

  • Copilot understands your repos natively
  • Pull request summaries and reviews
  • Issue understanding
  • Workspace context from your GitHub projects

For teams already paying for GitHub Enterprise, adding Copilot Business is a no-brainer upsell. The two are designed to work together.

4. Model Flexibility

Copilot now lets you choose your model:

  • GPT-4o - Fast, reliable default
  • Claude 3.5 Sonnet - Better at complex reasoning
  • o1 - For hard problems that need deeper thinking

Most tools lock you to one model. Copilot gives you options within the same subscription.

5. The Safe, Conservative Choice

Copilot has been around longest. It's stable. It's backed by Microsoft/GitHub. It's not going anywhere.

For enterprises worried about vendor risk, Copilot is the "nobody got fired for buying IBM" option.

Where Cursor Wins 🏆

1. Agent Mode (The Killer Feature)

Cursor's agent can:

  • Run terminal commands
  • Read and modify files across your project
  • Do semantic code search
  • Fix its own mistakes by running tests

This isn't "generate code and paste it." This is an AI that can actually DO things.

Real example: "Add authentication to this Express app."

Copilot will generate some code in your current file. Cursor Agent will create the middleware file, update your routes, add environment variables, create the database migration, and test that it works.

That’s a fundamentally different level of capability.

2. Composer: Multi-File Generation That Works

Describe what you want in natural language. Cursor generates coherent code across multiple files simultaneously.

Other tools have tried this (Copilot Edits, Cody's edit mode). Cursor's implementation is the most reliable. It understands how your files relate to each other and generates code that actually fits together.

3. Project-Wide Context

Cursor doesn't just see your current file. It understands your entire codebase:

  • Your folder structure
  • Your naming conventions
  • Your existing patterns
  • Related files and imports

When you ask it to add a feature, it writes code that matches your style. That's the difference between "AI-generated code" and "code that fits your project."

4. Tab Prediction

This sounds minor but changes how you code. Cursor predicts not just what you'll type, but where you'll edit next.

Finish a function? Tab. Cursor jumps to where you probably need to add the import statement. Accept it. Tab again. Cursor jumps to the test file.

It's subtle until you use it; then you feel crippled without it.

5. The Developer's Choice

Cursor is what developers recommend to each other. Check any Reddit thread, any Hacker News discussion, any coding Discord. The consensus is clear: if you want the best AI coding experience and don't mind switching editors, Cursor is it.

📬 Want more AI coding insights? Get weekly tool reviews and developer tips: subscribe to the newsletter.

Where Cody Wins 🏆

1. Large Codebase Understanding (Sourcegraph's Secret Weapon)

Cody is built by Sourcegraph, the company that invented universal code search. That matters.

When you connect Cody to your repositories, it builds a semantic understanding of your entire codebase. Not just your current project, but your whole organization's code.

This changes everything for enterprise developers:

  • "How does this microservice communicate with that one?"
  • "Where is this deprecated function still being used?"
  • "Show me all the places we handle authentication"

Cody can answer these across repositories. Copilot and Cursor are limited to your current project.

2. The Price is Right

Tool    | Free                               | Pro/Individual | Team
Copilot | 12K completions                    | $10/mo         | $19/mo
Cursor  | 2K completions                     | $20/mo         | $40/mo
Cody    | 200 chats + unlimited autocomplete | $9/mo          | Custom

Cody Pro at $9/month is the cheapest paid option. You get unlimited completions, 1000 chat messages, and access to multiple premium models.

For budget-conscious developers who want more than free tiers offer, Cody is compelling.

3. IDE Flexibility (Without the Copilot Lock)

Unlike Cursor (which IS an IDE), Cody works as an extension:

  • VS Code
  • JetBrains (all IDEs)
  • Neovim

You keep your existing setup. You don't have to leave your carefully configured PyCharm or WebStorm.

This is Copilot's strength too, but Cody often has better codebase understanding at a lower price.

4. Enterprise Code Intelligence

For organizations with sprawling codebases across multiple repositories, Cody + Sourcegraph is unmatched:

  • Cross-repository search and understanding
  • Batch changes across many repos
  • Code navigation that actually works at scale
  • Understanding of how code connects across microservices

If you work at a company with 500+ repos, ask your platform team about Sourcegraph. Cody is the AI layer on top of genuinely powerful infrastructure.

5. Model Variety on Free Tier

Even free Cody users get:

  • Claude 3.5 Sonnet
  • Claude 3.5 Haiku
  • Gemini 2.0 Flash
  • GPT-4o-mini

That's legitimate model choice without paying anything. Copilot Free locks you to a single model. Cursor Free is heavily limited.

Head-to-Head: Key Comparisons

Inline Completions

Tool    | Speed      | Quality                 | Multi-line
Copilot | ⭐ Fastest | Very good               | ✅ Yes
Cursor  | Fast       | ⭐ Best (context-aware) | ✅ Yes
Cody    | Fast       | Very good               | ✅ Yes

Winner: Copilot for raw speed. Cursor for quality.

Multi-File Editing

Tool    | Feature       | Reliability            | Ease of Use
Copilot | Copilot Edits | Medium (can get stuck) | Easy
Cursor  | Composer      | ⭐ High                | Medium learning curve
Cody    | Edit mode     | Good                   | Easy

Winner: Cursor Composer. More capable, more reliable.

Agent/Autonomous Work

Tool    | Can Run Commands | Self-Correction | True Autonomy
Copilot | ❌ No            | ❌ No           | ❌ No
Cursor  | ✅ Yes           | ✅ Yes          | ⭐ Yes
Cody    | ⚠️ Limited       | ⚠️ Limited      | ⚠️ Growing

Winner: Cursor, by a mile. Agent mode is the future.

Codebase Understanding

Tool    | Current File | Current Project | Cross-Repository
Copilot | ✅           | ✅ Good         | ❌
Cursor  | ✅           | ⭐ Excellent    | ❌
Cody    | ✅           | ⭐ Excellent    | ⭐ Yes (Sourcegraph)

Winner: Cody for enterprise. Cursor for local projects.

Pricing Deep Dive

Monthly Costs

Tier       | GitHub Copilot                   | Cursor                              | Cody
Free       | 12,000 completions, limited chat | 2,000 completions, 50 slow requests | Unlimited autocomplete, 200 chats
Individual | $10/mo (Pro)                     | $20/mo (Pro)                        | $9/mo (Pro)
Team       | $19/user/mo                      | $40/user/mo                         | Custom

What You Get at Each Tier

Free Tier Comparison:

  • Copilot: Most generous for completions (12K/month)
  • Cursor: Most limited (2K completions burn fast)
  • Cody: Best chat limits (200/month), unlimited autocomplete

Individual/Pro Tier:

  • Copilot ($10): Unlimited completions, full chat, model choice
  • Cursor ($20): Composer, Agent mode, project context
  • Cody ($9): 1000 chats, premium models, Sourcegraph features

Value Ranking

  1. Best free: Copilot Free (if completions matter most)
  2. Best budget paid: Cody Pro at $9/month
  3. Best power-user: Cursor Pro at $20/month
  4. Best for teams: Copilot Business at $19/user/month (GitHub integration)

Decision Matrix

If You…                            | Choose           | Why
Use JetBrains and won't switch     | Copilot or Cody  | Cursor requires switching IDEs
Want max AI capability             | Cursor           | Agent mode + Composer is unmatched
Work on massive codebase           | Cody             | Sourcegraph's cross-repo understanding
Want free and decent               | Copilot Free     | 12K completions/month, works everywhere
Team on GitHub Enterprise          | Copilot Business | Native integration
Budget-conscious, want paid        | Cody Pro         | $9/mo is cheapest
Full-stack dev, don't mind new IDE | Cursor           | The consensus best
Already customized VS Code heavily | Copilot or Cody  | Extensions, not new IDEs

The Honest Recommendation

All three are excellent. You'll be more productive with any of them.

Get GitHub Copilot if:

  • You use multiple IDEs (especially JetBrains)
  • Your team is on GitHub and wants native integration
  • You want the safe, established choice
  • The free tier is enough for you

Get Cursor if:

  • You're a power user who wants maximum AI capability
  • Agent mode and Composer appeal to you
  • You're willing to use a new IDE (it's basically VS Code)
  • You're doing serious full-stack development

Get Cody if:

  • You work on large enterprise codebases
  • Cross-repository understanding matters
  • You want paid features at the lowest price
  • You prefer staying in JetBrains or your existing IDE

The hybrid approach: Start with Copilot Free. If you hit limits or want more power, try Cursor Pro for a month. If you work on enterprise-scale code, evaluate Cody.

What I Actually Use

Daily driver: Cursor Pro. The Composer and Agent mode have genuinely changed how I build features. Worth $20/month.

Backup: Copilot Free installed everywhere. When I'm on a coworker's machine or a server, it's there.

For work projects: We use Copilot Business because the team is already on GitHub Enterprise and the integration is seamless.

The reality? Pick one and actually use it. The tool matters less than building the habit of working with AI assistance. If you want the latest on how these tools leverage MCP for deeper integrations, our best MCP servers guide covers what's worth installing.

For a broader overview of all AI coding assistants including Claude Code and Windsurf, see our best AI coding assistants 2026 guide or the 7 best AI coding assistants ranked. Wondering which underlying AI model is best for coding? Our Claude vs GPT-4 for coding guide breaks it down. For a focused comparison of Cursor as an editor vs VS Code, check our Cursor vs VS Code guide. And if you want to try OpenAI's agentic approach, read our Codex macOS app review.

📬 Get weekly AI tool reviews and comparisons delivered to your inbox: subscribe to the AristoAIStack newsletter.

Keep Reading

  • 7 Best AI Coding Assistants Ranked
  • Cursor vs GitHub Copilot 2026
  • Cursor vs VS Code: Which AI Editor?
  • Claude vs GPT-4 for Coding
  • OpenAI Codex macOS App Review
  • AI Coding Agents: Cursor vs Windsurf vs Claude Code vs Codex

Last updated: February 2026

If You Depend on Eclipse Platform Technologies, Now Is the Time to Act

While IDE usage declines, the underlying Eclipse Platform technologies remain mature and widely deployed.

Eclipse Rich Client Platform (RCP) and many of the components delivered through the Eclipse Simultaneous Release are embedded in a large number of commercial products, including industrial software, engineering tools, embedded platforms, and other long lived systems.

[Image: Eclipse IDE logo with a clock, illustrating an ecosystem at risk]

🔧 Mature technologies, widely deployed in real systems

These technologies are often deeply integrated into products with long maintenance horizons, strong stability requirements, and increasing security and compliance constraints.

This creates a growing imbalance: the platform continues to be relied upon in production, while the model (community contributors, corporate sponsors…) that historically financed its maintenance, security fixes, and coordinated releases is under pressure.

The Eclipse IDE is facing a structural shift. Its usage as a primary developer tool is steadily declining as development environments, workflows, and expectations evolve. This is not a criticism; it is an observable trend. What matters is the consequence: the historical funding and contribution model that sustained the Eclipse IDE and its underlying platform is weakening.

TL;DR

If your products rely on Eclipse Platform technologies from the Eclipse Simultaneous Release, continued usage alone will not secure their future. Active engagement is needed now more than ever, whether through contributions, Working Group membership, or direct sponsorship.

[Image: architectural pillar supporting a heavy structure, with cracks at its base indicating sustainability risk]

🏢 A concrete responsibility for organisations that depend on them

For organisations relying on Eclipse Platform technologies, this is a tangible sustainability and risk management issue.

If components from the Eclipse Simultaneous Release are part of your products or internal platforms, this directly impacts:

  • long term maintenance and security posture
  • supply chain transparency and compliance
  • roadmap predictability and technical risk

The Eclipse IDE and RCP Working Group exists to provide a vendor neutral and structured framework for organisations that want to take responsibility for the technologies they depend on and contribute to their sustainability.

[Image: digital hourglass with streams of binary code, symbolising the urgency to act on platform sustainability]

📜 SBOMs and the EU Cyber Resilience Act: compliance is not optional

As an illustration, the Eclipse Foundation delivers an open source SBOM generation tool for the Eclipse IDE and Platform projects, developed last year thanks to funding from the Sovereign Tech Agency.

This point deserves explicit attention: from 2026 onwards, SBOMs will be a key requirement under the EU Cyber Resilience Act. For many software products, particularly in regulated or industrial contexts, producing and maintaining accurate SBOMs will no longer be optional.

Maintaining compliant SBOM tooling on RCP-based products requires sustained effort:

  • alignment with evolving standards
  • reliable automation in build and release pipelines
  • long term maintenance and security updates

Without active support from the organisations that depend on these tools, we will not be able to maintain them at the level required for regulatory compliance.

Compliance is mandatory. Sustainability enables it.

[Image: exploded view of a technological sphere revealing internal software components, symbolising SBOM transparency and regulatory compliance]

🤝 Two concrete ways to engage

If your organisation has a dependency on Eclipse Platform technologies:

  • engage via the Eclipse IDE and RCP Working Group, connect with me or use https://eclipseide.org/membership/

If you are an individual or an organisation not ready for Working Group membership:

  • direct sponsorship provides an immediate way to support the platform https://www.eclipse.org/sponsor/ide/

If your products rely on these technologies, now is the right time to get involved.

Thomas Froment


Common Manual Testing Techniques and the Future of Manual Testing in the Age of AI

Introduction

Software testing plays a critical role in ensuring that applications function correctly, meet user expectations, and maintain high quality standards. Manual testing involves human testers executing test cases without automation tools, allowing them to evaluate software from a real user's perspective. Manual testing techniques help testers identify defects, validate requirements, and ensure smooth user experiences.

This blog explores common manual testing types and techniques, explains Boundary Value Analysis and Decision Table Testing in detail, and discusses how manual testing is evolving in the modern AI-driven software industry.

1. Manual Testing Techniques

Manual testing techniques are structured methods used to design and execute test cases effectively. Unlike testing types, which define what to test, techniques focus on how to test.

Equivalence Partitioning

One widely used technique is Equivalence Partitioning, where input data is divided into logical groups with similar expected outcomes. Testers select representative values from each group, reducing the number of test cases while maintaining coverage.

Boundary Value Analysis

Boundary Value Analysis focuses on testing values at the edges of acceptable input ranges. Since many defects occur at boundaries, this technique helps identify errors related to limits and validations.
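
As a concrete illustration (the age range and validator below are invented for this example, not taken from any particular system), boundary value test cases for a field accepting ages 18-65 target the values at and just around each edge:

```typescript
// Hypothetical validator: an age field that accepts 18-65 inclusive.
function isValidAge(age: number): boolean {
  return Number.isInteger(age) && age >= 18 && age <= 65;
}

// Boundary Value Analysis picks values at and just around each edge.
const boundaryCases: Array<[number, boolean]> = [
  [17, false], // just below the lower boundary
  [18, true],  // lower boundary itself
  [19, true],  // just above the lower boundary
  [64, true],  // just below the upper boundary
  [65, true],  // upper boundary itself
  [66, false], // just above the upper boundary
];

for (const [input, expected] of boundaryCases) {
  if (isValidAge(input) !== expected) {
    throw new Error(`Boundary case failed for input ${input}`);
  }
}
```

Six targeted values exercise the places where off-by-one defects typically hide, instead of testing dozens of arbitrary mid-range ages.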
Decision Table Testing
Another important method is Decision Table Testing, which is used when software behaviour depends on multiple conditions. Testers create tables that map different input combinations to expected outcomes, ensuring all possible scenarios are tested systematically.
Exploratory Testing
Exploratory Testing allows testers to interact with the application freely without predefined scripts. This technique encourages creativity and helps uncover hidden defects and usability problems that structured tests might overlook.
Error Guessing
Error Guessing relies on the tester's experience to anticipate potential problem areas, such as incorrect data handling or missing validations.
By applying these techniques, testers can create efficient test cases, improve coverage, and identify defects early in the development process.

Manual Testing Types

Manual testing types refer to the different levels or categories of testing activities performed during the software development process.
Functional Testing
One of the most common testing types is Functional Testing, which ensures that the application behaves according to business requirements and produces expected outputs for given inputs. Testers validate features such as login systems, payment processes, and data processing workflows.
Integration Testing
Another important type is Integration Testing, where testers verify that different modules or components work correctly when combined. This type is essential for identifying issues related to data flow, communication errors, or incorrect interactions between system components.
System Testing
System Testing evaluates the entire application as a complete system. Testers check whether the software meets overall requirements, including performance, reliability, and compatibility. This stage simulates real-world usage scenarios to confirm that the product is ready for deployment.
User Acceptance Testing (UAT)
User Acceptance Testing (UAT) is conducted from the end-user's perspective. Stakeholders or clients verify whether the application meets business expectations and is suitable for real-world use before release.
Regression Testing
In addition, Regression Testing is performed whenever new features or updates are introduced. Testers ensure that previously working functionalities remain unaffected by recent changes.

2. Boundary Value Analysis (BVA)

Boundary Value Analysis is a test design technique that focuses on the edges or limits of input ranges. Defects frequently occur at boundary points due to incorrect logic or validation errors. Therefore, testing boundary values is often more effective than testing typical input values.
How It Works
Testers identify minimum and maximum limits for input fields and then create test cases around those boundaries. Both valid and invalid boundary values are tested to ensure the system handles inputs correctly.
For example, if a field accepts values between 18 and 60:

  • Valid boundaries: 18 and 60
  • Invalid boundaries: 17 and 61

Testing these values helps detect issues such as incorrect validations or "off-by-one" errors.
Advantages

  • Efficient test coverage with fewer test cases
  • High probability of detecting defects
  • Useful for numeric and data-driven applications

Example
In a banking application, a transfer limit might allow transactions up to ₹50,000. Testing values like ₹49,999, ₹50,000, and ₹50,001 ensures the system properly enforces transaction limits.
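The banking example can be sketched in code — a minimal Python illustration, where `is_transfer_allowed` is a hypothetical validator standing in for the real transfer check:

```python
# Hypothetical transfer-limit check for the banking example above:
# amounts from 1 up to the limit (inclusive) are accepted.
def is_transfer_allowed(amount: int, limit: int = 50_000) -> bool:
    """Return True if the amount is within the allowed transfer range."""
    return 1 <= amount <= limit

# Boundary Value Analysis: test at and around each boundary.
boundary_cases = {
    49_999: True,   # just below the upper limit
    50_000: True,   # exactly at the limit
    50_001: False,  # just above the limit
    1: True,        # lower boundary
    0: False,       # just below the lower boundary
}

for amount, expected in boundary_cases.items():
    assert is_transfer_allowed(amount) == expected, amount
```

Notice that the test set concentrates on five values around the two boundaries rather than sampling the whole range — that is the entire point of the technique.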

3. Decision Table Testing

Decision Table Testing is used when application behaviour depends on multiple conditions and their combinations. It helps testers systematically evaluate complex business rules and ensure all scenarios are covered.
How It Works
Testers create a table listing:

  • Conditions (inputs or rules)
  • Possible actions or outcomes

Each row represents a unique combination of conditions, allowing testers to verify how the system responds in each case.
Advantages

  • Clear visualisation of complex logic
  • Reduces the risk of missing scenarios
  • Improves test coverage for rule-based systems

Example
Consider an e-commerce application offering discounts based on:

  • User type (new or existing)
  • Purchase amount (above or below ₹5,000)

A decision table ensures that testers verify all possible combinations and confirm the correct discount is applied.
Decision tables are especially useful in financial systems, insurance applications, and e-commerce platforms where multiple business rules interact.
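The discount rules above can be captured directly as a decision table in code — a hypothetical Python sketch with illustrative discount percentages, not actual business rules:

```python
# Hypothetical discount rules for the e-commerce example above,
# expressed as a decision table: (user_type, high_value) -> discount %.
from itertools import product

decision_table = {
    ("new", True): 15,       # new user, purchase above ₹5,000
    ("new", False): 10,      # new user, purchase below ₹5,000
    ("existing", True): 5,   # existing user, purchase above ₹5,000
    ("existing", False): 0,  # existing user, purchase below ₹5,000
}

def discount(user_type: str, amount: int) -> int:
    """Look up the discount for one combination of conditions."""
    return decision_table[(user_type, amount > 5_000)]

# Enumerate every condition combination so no scenario is missed.
for user_type, high_value in product(["new", "existing"], [True, False]):
    amount = 6_000 if high_value else 4_000
    print(user_type, amount, "->", discount(user_type, amount))
```

Because the table's keys enumerate every combination explicitly, a missing rule surfaces immediately as a `KeyError` instead of a silent wrong discount.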

4. The Future of Manual Testing in the Age of AI

Artificial Intelligence is rapidly transforming the software industry, and software testing is no exception. With the rise of intelligent automation, predictive analytics, and AI-driven testing tools, many people wonder whether manual testing will disappear. However, instead of replacing manual testing, AI is reshaping its purpose and expanding the role of human testers.
Despite these advancements, manual testing continues to play a vital role in quality assurance. Human testers are better equipped to evaluate user interfaces, assess workflow logic, and detect unexpected issues that may not be captured by automated scripts. Exploratory testing, usability testing, and ethical decision-making require human insight and cannot be fully automated.

How AI Supports Manual Testing

AI does not eliminate manual testing; instead, it enhances testers' capabilities by:

  • Automating repetitive regression tests
  • Generating test data and test cases
  • Predicting high-risk areas
  • Prioritising tests for faster releases

These improvements allow manual testers to focus on complex testing activities such as exploratory testing and usability evaluation.

Skills Required for Future Manual Testers

To remain competitive in the AI era, manual testers should:

  • Understand automation and AI tools
  • Develop analytical and critical thinking skills
  • Focus on user experience and usability testing
  • Collaborate closely with developers and product teams

The role of testers is shifting from executing repetitive tests to providing strategic insights into product quality.
The future of manual testing lies in collaboration with AI rather than competition against it. Testers will increasingly act as quality analysts who interpret AI results, design meaningful test strategies, and ensure that software meets real business and user needs.

Conclusion

Manual testing remains a cornerstone of software quality assurance despite the rapid advancement of automation and artificial intelligence.
Boundary Value Analysis focuses on testing input limits where errors frequently occur, while Decision Table Testing ensures comprehensive coverage of complex business rules.
Although AI is transforming the testing landscape by automating repetitive tasks and improving efficiency, manual testing is far from obsolete. Instead, its role is evolving toward more strategic and creative activities that require human insight.
The future of manual testing lies in combining human expertise with AI-driven tools. Testers who adapt to these technological changes while maintaining strong analytical and user-focused skills will continue to play an essential role in delivering high-quality software.

Design HLD – Notification System

Requirements

Functional Requirements

  1. Support sending notifications to users.
  2. Support delivery across multiple channels (Email, SMS, Push, In-app).
  3. Support critical and promotional notification types.
  4. Support user notification preferences and opt-in/opt-out.
  5. Support scheduled notifications.
  6. Support bulk notifications targeting large user groups.
  7. Support safe retries and idempotent notification processing.
  8. Support tracking of notification delivery status.

Non-Functional Requirements

  1. Highly available and fault tolerant.
  2. Low-latency delivery for critical notifications.
  3. High throughput with large-scale fan-out.
  4. Highly scalable with increasing traffic.
  5. Durable notification processing with no message loss.
  6. Secure notification delivery and access control.
  7. Cost-efficient operation at scale.

Key Concepts You Must Know

Notification vs Delivery Attempt

A notification represents the logical intent to notify a user, while delivery attempts represent concrete, channel-specific executions. A single notification can result in multiple delivery attempts due to retries, fallbacks, or multi-channel delivery.

Critical vs Promotional Isolation

Critical notifications such as OTPs or chat messages must be processed in isolation from promotional traffic. This prevents head-of-line blocking and guarantees that spikes in bulk or campaign traffic do not impact latency-sensitive notifications.

Priority-Aware Queuing

Notifications are routed through priority-aware queues so that high-priority messages are always processed ahead of lower-priority ones. This ensures predictable latency for critical flows even under heavy system load.
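Priority-aware dispatch can be sketched with a simple in-process heap — an illustration of the ordering guarantee only; at scale this is realized with separate queues or topics per priority tier:

```python
# Minimal priority-aware dispatch using a heap; lower priority numbers
# are processed first, and a counter preserves FIFO order within a tier.
import heapq
import itertools

queue, counter = [], itertools.count()

def enqueue(priority: int, event: str) -> None:
    # heapq pops the smallest tuple first: (priority, arrival order, event)
    heapq.heappush(queue, (priority, next(counter), event))

enqueue(1, "otp-critical")   # critical: priority 1
enqueue(5, "promo-sale")     # promotional: priority 5
enqueue(1, "chat-message")   # critical: priority 1

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # critical events drain first, in arrival order
```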

Idempotent Processing

All notification operations must be idempotent to safely handle retries caused by network failures or timeouts. Repeating the same request should always result in the same final state without creating duplicate notifications.
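A minimal sketch of idempotent creation, assuming an in-memory key store (a real deployment would keep idempotency keys in Redis or the metadata database with a TTL):

```python
# Sketch: repeated requests with the same idempotency key always return
# the same notification_id instead of creating a duplicate.
import uuid

_processed: dict[str, str] = {}  # idempotency_key -> notification_id

def create_notification(idempotency_key: str, payload: dict) -> str:
    """Return the same notification_id for repeated requests."""
    if idempotency_key in _processed:
        return _processed[idempotency_key]   # duplicate: no new record
    notification_id = str(uuid.uuid4())      # first time: create it
    _processed[idempotency_key] = notification_id
    return notification_id

first = create_notification("key-123", {"user_id": "u1"})
retry = create_notification("key-123", {"user_id": "u1"})
assert first == retry  # the client retry did not create a duplicate
```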

Safe Retries

Transient failures during delivery should trigger automatic retries using controlled retry policies such as exponential backoff. Retries must be bounded to avoid infinite loops and system overload.
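Bounded exponential backoff with jitter can be sketched as follows; the base delay, cap, and maximum attempt count are illustrative values, not part of the design:

```python
# Sketch of a bounded exponential-backoff policy with jitter.
import random
from typing import Optional

def next_retry_delay(attempt: int, base: float = 1.0,
                     cap: float = 300.0,
                     max_attempts: int = 5) -> Optional[float]:
    """Delay in seconds before retry `attempt` (1-based), or None to stop."""
    if attempt > max_attempts:
        return None                      # bounded: give up, route to DLQ
    delay = min(cap, base * 2 ** (attempt - 1))  # 1s, 2s, 4s, ... capped
    return delay * random.uniform(0.5, 1.0)      # jitter avoids retry storms
```

Returning `None` past the attempt limit is what keeps retries bounded; the caller then hands the notification to the Dead Letter Queue instead of looping forever.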

Scheduling vs Immediate Delivery

Immediate notifications are dispatched as soon as they are accepted by the system, while scheduled notifications are stored and triggered at a future time. Scheduling logic must be reliable and time-correct to ensure notifications are sent neither early nor late.

Bulk Fan-out Model

Bulk notifications should be expanded asynchronously into individual notification instances. Fan-out must happen outside the critical path to prevent large campaigns from overwhelming the system.

User Preferences Enforcement

Notification delivery must respect user-configured preferences such as opt-in, opt-out, preferred channels, and quiet hours. Preferences are enforced consistently across all notification types, with configurable exceptions for critical messages.

Dead Letter Queue (DLQ)

Notifications that fail permanently after exhausting retries are moved to a Dead Letter Queue. The DLQ provides visibility, auditability, and a mechanism for manual inspection or reprocessing.

Durable Event Processing

Once a notification is accepted, it must be durably persisted so it is not lost due to crashes or restarts. Durability guarantees that every accepted notification is eventually processed or explicitly marked as failed.

Capacity Estimation

Key Assumptions

  • DAU (Daily Active Users): ~50 million
  • Notifications per user per day: ~5
  • Traffic mix: ~80% critical, ~20% promotional
  • Traffic pattern: Write-heavy with bursty fan-out
  • System scale: Large-scale, distributed SaaS system assumed

Notification Volume Estimation

Total notifications per day ⇒ 50M users × 5 notifications ⇒ ~250M notifications/day
Critical notifications ⇒ ~80% of 250M ≈ ~200M/day
Promotional notifications ⇒ ~20% of 250M ≈ ~50M/day

Throughput Estimation (QPS)

Average write QPS ⇒ 250M / 86,400 ⇒ ~2,900 notifications/sec
Peak write QPS ⇒ Up to ~1,000,000 notifications/sec during spikes
Fan-out amplification ⇒ A single bulk request can expand into thousands to millions of notifications

Read Traffic Estimation

Status checks, analytics, dashboards ⇒ Reads assumed ~2–3× writes ⇒ Average read QPS ≈ ~6,000–9,000/sec

Metadata Size Estimation

Metadata per notification ⇒ ~1 KB (IDs, user, channel, status, retries, timestamps)
Metadata per day ⇒ 250M × 1 KB ⇒ ~250 GB/day
Monthly metadata (30 days retention) ⇒ ~7.5 TB
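The back-of-envelope numbers above can be reproduced in a few lines of Python:

```python
# Reproducing the capacity estimates above.
dau = 50_000_000                               # daily active users
per_user = 5                                   # notifications per user/day
total_per_day = dau * per_user                 # ~250M notifications/day
avg_qps = total_per_day / 86_400               # ~2,900 notifications/sec
metadata_gb_per_day = total_per_day * 1 / 1e6  # 1 KB each -> ~250 GB/day
monthly_tb = metadata_gb_per_day * 30 / 1_000  # ~7.5 TB over 30 days
print(total_per_day, round(avg_qps), metadata_gb_per_day, monthly_tb)
```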

Core Entities

  • User: Represents a system user who receives notifications.
  • Notification: Represents the logical intent to notify a user; stores type, priority, schedule, and lifecycle state, not delivery execution.
  • Delivery Attempt: Represents a single channel-specific attempt to deliver a notification and captures retries and failures.
  • Notification Preference: Represents user-defined preferences such as opt-in/opt-out, preferred channels, and quiet hours.
  • Campaign: Represents a bulk or promotional notification request that targets a large group of users.
  • Schedule: Represents a time-based trigger that controls when a notification or campaign should be delivered.
  • Retry Task: Represents a delayed retry for a failed delivery attempt using a retry policy.
  • Dead Letter Entry: Represents a permanently failed notification that requires audit or manual intervention.

Database Design

Database Choice

  • The system uses a distributed NoSQL database (such as Cassandra or DynamoDB) to store notification metadata. This is because the system needs to handle very high write traffic, scale horizontally, and remain fast even during large notification spikes.
  • Data is partitioned by tenant and user so that notifications are evenly spread across nodes and no single partition becomes a bottleneck. Time-based fields (like creation time) are used to efficiently query recent notifications and to clean up old data.
  • A relational database may be used for tenant configuration, billing, and reporting, where strong relationships and transactional queries are more important than write throughput.

Users Table

Represents system users.

User

user_id (PK)
tenant_id
created_at
status

Used for

  • User identity
  • Tenant isolation
  • Preference lookup

Notification Table

Represents a user-visible notification.

Notification

notification_id (PK)
user_id (FK → User)
tenant_id
type (critical / promotional)
priority
status (pending / delivered / failed / expired)
scheduled_at
expiry_at
created_at

Key Points

  • One row per user notification
  • Represents intent and lifecycle
  • Used for auditing and status queries

DeliveryAttempt Table

Represents channel-level delivery execution.

DeliveryAttempt

attempt_id (PK)
notification_id (FK → Notification)
channel (email / sms / push / in-app)
status (success / failed / retrying)
retry_count
last_error
created_at

Key Points

  • Multiple attempts per notification
  • Tracks retries and failures
  • Enables per-channel isolation

NotificationPreference Table

Represents user notification preferences.

NotificationPreference

user_id (PK)
channel
enabled
quiet_hours
updated_at

Key Points

  • Source of truth for opt-in / opt-out
  • Enforced during processing

Campaign Table

Represents bulk notification requests.

Campaign

campaign_id (PK)
tenant_id
status (scheduled / active / completed / cancelled)
scheduled_at
expiry_at
created_at

Key Points

  • Used only for bulk notifications
  • Expanded asynchronously into notifications

RetryTask Table

Represents scheduled retries.

RetryTask

retry_task_id (PK)
attempt_id (FK → DeliveryAttempt)
next_retry_at
retry_policy
created_at

Key Points

  • Retries are time-based, not immediate
  • Drives retry scheduling

DeadLetter Table

Represents permanently failed notifications.

DeadLetter

notification_id
channel
failure_reason
created_at

Key Points

  • Terminal failure state
  • Used for audit and investigation

Indexing Strategy

| Access Pattern           | Index                 |
| ------------------------ | --------------------- |
| Fetch user notifications | (user_id, created_at) |
| Priority processing      | (priority, status)    |
| Retry scheduling         | (next_retry_at)       |
| Campaign expansion       | (campaign_id)         |
| Cleanup jobs             | (status, expiry_at)   |

Indexes are chosen based on actual query patterns, not theoretical normalization.

Transaction Model

  • The system avoids complex multi-table transactions. Each notification-related operation is handled as a single atomic write, which keeps the system fast and reliable.
  • To handle retries safely, the system uses idempotency keys, ensuring that the same request processed multiple times results in only one notification. Notification state moves forward in a controlled manner (for example, PENDING → DELIVERED, or PENDING → FAILED) and never moves backward.

This approach keeps the system correct even when requests are retried or processed in parallel.
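The forward-only state machine can be sketched as an explicit transition map — state names here mirror the statuses used in this design, with CANCELLED added for the scheduled-cancellation flow:

```python
# Sketch of forward-only state transitions: every state lists the states
# it may legally move to; terminal states allow no further transitions.
ALLOWED = {
    "PENDING": {"DELIVERED", "FAILED", "EXPIRED", "CANCELLED"},
    "DELIVERED": set(),   # terminal
    "FAILED": set(),      # terminal (moves to DLQ)
    "EXPIRED": set(),     # terminal
    "CANCELLED": set(),   # terminal
}

def transition(current: str, new: str) -> str:
    """Apply a state change, rejecting any backward or illegal move."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = transition("PENDING", "DELIVERED")  # ok
# transition("DELIVERED", "PENDING") would raise: state never moves backward
```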

Failure Handling

  • If a notification is saved successfully but delivery fails, it remains in a pending or retryable state and is retried automatically. Retry information is stored so the system can safely continue even after crashes or restarts.
  • Notifications that fail permanently are moved to a Dead Letter Queue, making failures visible and easy to investigate. Background jobs periodically scan for stuck or inconsistent records and safely recover or clean them up.

Consistency Model

  • The system uses strong consistency for critical data such as notification creation, status updates, retries, and user preferences. This ensures users do not receive duplicate or incorrect notifications.
  • For analytics and reporting, the system uses eventual consistency, since slight delays in metrics do not affect correctness. This balance allows the system to scale efficiently while keeping user-facing behavior correct.

API / Endpoints

Send Notification → POST: /notifications

Creates a new notification request.

Request

{
  "user_id": "string",
  "type": "critical | promotional",
  "channels": ["email", "sms", "push"],
  "message": {
    "title": "string",
    "body": "string"
  },
  "schedule_at": "datetime (optional)",
  "expiry_at": "datetime (optional)",
  "idempotency_key": "string"
}

Response

{
  "status": "accepted",
  "notification_id": "uuid"
}

Send Bulk Notifications

Creates a bulk notification campaign. → POST: /notifications/bulk

Request

{
  "campaign_name": "string",
  "type": "promotional",
  "target": {
    "segment_id": "string"
  },
  "channels": ["email", "push"],
  "message": {
    "title": "string",
    "body": "string"
  },
  "schedule_at": "datetime",
  "expiry_at": "datetime"
}

Response

{
  "status": "accepted",
  "campaign_id": "uuid"
}

Get Notification Status

Fetches the current status of a notification. → GET: /notifications/{notification_id}

Response

{
  "notification_id": "uuid",
  "status": "pending | delivered | failed | expired",
  "last_updated": "datetime"
}

Retry Notification (Internal / Admin)

Triggers a retry for a failed notification. → POST: /notifications/{notification_id}/retry

Response

{
  "status": "retry_scheduled"
}

Cancel Scheduled Notification

Cancels a notification that has not yet been delivered. → DELETE: /notifications/{notification_id}

Response

{
  "status": "cancelled"
}

Get User Notification Preferences

Fetches notification preferences for a user. → GET: /users/{user_id}/preferences

Response

{
  "channels": {
    "email": true,
    "sms": false,
    "push": true
  },
  "quiet_hours": {
    "start": "22:00",
    "end": "08:00"
  }
}

Update User Notification Preferences

Updates notification preferences for a user. → PUT: /users/{user_id}/preferences

Request

{
  "channels": {
    "email": true,
    "sms": false,
    "push": true
  },
  "quiet_hours": {
    "start": "22:00",
    "end": "08:00"
  }
}

Response

{
  "status": "updated"
}

List Notifications (Optional)

Fetches recent notifications for a user. → GET: /users/{user_id}/notifications?limit=20

Response

{
  "notifications": [
    {
      "notification_id": "uuid",
      "status": "delivered",
      "created_at": "datetime"
    }
  ]
}

Key API Design Notes

  • All write APIs are idempotent using idempotency_key.
  • APIs are asynchronous; delivery is not guaranteed at request time.
  • Bulk APIs only enqueue campaigns; fan-out happens asynchronously.
  • Admin and retry APIs are restricted to internal services.

System Components

1. Client (Web / Mobile / Backend Producers)

Primary Responsibilities:

  • Generates notification requests in response to user actions or system events such as login, payment, chat messages, or campaigns.
  • Attaches idempotency keys and contextual metadata (user, tenant, type, priority).
  • Does not wait for delivery completion and treats notification APIs as asynchronous.

Examples:
Web apps, Mobile apps, Order Service, Auth Service, Chat Service

Why:
Keeps product services simple and prevents notification latency from impacting core user flows.

2. API Gateway

Primary Responsibilities:

  • Acts as the secure ingress layer for all notification APIs.
  • Performs authentication, authorization, tenant validation, schema validation, and request normalization.
  • Applies per-tenant and per-client rate limits to protect downstream systems.
  • Rejects duplicate requests early using idempotency keys when possible.

Examples:
AWS API Gateway, Kong, NGINX, Envoy

Why:
Provides centralized security, traffic control, and isolation at scale.

3. Notification Service (Control Plane)

Primary Responsibilities:

  • Validates notification requests and applies business rules.
  • Classifies notifications as critical or promotional and assigns priority.
  • Fetches and enforces user preferences including opt-in, channel selection, and quiet hours.
  • Validates scheduling and expiry constraints.
  • Persists notification metadata as the source of truth.
  • Publishes notification events to the message queue for further processing.

Examples:
Spring Boot / Node.js / Go microservice

Why:
Centralizes orchestration logic while keeping the system asynchronous and scalable.

4. Message Queue / Event Bus

Primary Responsibilities:

  • Decouples notification ingestion from processing and delivery.
  • Buffers traffic spikes and absorbs bursty workloads.
  • Provides ordering guarantees where required (e.g., per user).
  • Uses separate topics or queues to isolate critical traffic from promotional traffic.
  • Ensures at-least-once delivery semantics.

Examples:
Apache Kafka, AWS SNS + SQS

Why:
Enables high-throughput, fault-tolerant, and scalable event-driven processing.

5. Scheduler Service

Primary Responsibilities:

  • Stores and manages scheduled notifications and delayed retry tasks.
  • Triggers notification events exactly at their scheduled execution time.
  • Ensures notifications are not delivered before schedule_at or after expiry_at.
  • Handles large volumes of scheduled tasks using partitioned or sharded scheduling.

Examples:
Kafka delay topics, Redis Sorted Sets, Quartz, AWS EventBridge

Why:
Provides reliable time-based execution without inefficient polling.

6. Campaign / Fan-out Service

Primary Responsibilities:

  • Processes bulk notification requests and resolves target audiences.
  • Expands campaigns into per-user notification events asynchronously.
  • Applies batching, throttling, and backpressure to control fan-out rate.
  • Tracks campaign progress and completion state.

Examples:
Custom fan-out service + Kafka consumers, Flink/Spark for very large campaigns

Why:
Prevents large campaigns from overwhelming real-time notification flows.

7. Channel Workers – Email

Primary Responsibilities:

  • Consumes email notification events and formats email content.
  • Integrates with email providers and handles provider-specific constraints.
  • Manages retries, bounces, and transient failures.
  • Emits delivery results back into the system.

Examples:
Amazon SES, SendGrid, Mailgun

Why:
Email delivery requires specialized handling and independent scaling.

8. Channel Workers – SMS

Primary Responsibilities:

  • Delivers SMS notifications with low latency.
  • Handles provider throttling, regional routing, and failover.
  • Normalizes errors from different providers into a common failure model.

Examples:
Twilio, Vonage (Nexmo), AWS SNS

Why:
SMS delivery is latency-sensitive and highly provider-dependent.

9. Channel Workers – Push

Primary Responsibilities:

  • Sends push notifications to mobile and web devices.
  • Manages device tokens, expiration, and invalid token cleanup.
  • Handles platform-specific delivery semantics and retries.

Examples:
Firebase Cloud Messaging (FCM), Apple Push Notification Service (APNs)

Why:
Push platforms require tight integration with OS-level services.

10. Channel Workers – In-App

Primary Responsibilities:

  • Delivers real-time notifications to active users over persistent connections.
  • Maintains connection state and fan-out to connected clients.
  • Falls back gracefully when users are offline.

Examples:
WebSockets, Server-Sent Events (SSE), Redis Pub/Sub

Why:
Provides the lowest-latency notification path for active users.

11. Retry Service

Primary Responsibilities:

  • Tracks failed delivery attempts and retry counts.
  • Applies retry policies such as exponential backoff and maximum retry limits.
  • Schedules retries through the Scheduler Service.
  • Ensures retries are controlled and do not cause retry storms.

Examples:
Kafka retry topics, Redis delay queues, SQS with visibility timeout

Why:
Improves reliability while protecting the system under failure conditions.

12. Dead Letter Queue (DLQ)

Primary Responsibilities:

  • Stores notifications that fail permanently after all retries.
  • Captures failure context and error metadata.
  • Supports auditing, alerting, and optional manual reprocessing.

Examples:
Kafka DLQ topics, AWS SQS DLQ

Why:
Ensures failures are visible and never silently dropped.

13. Preference Service

Primary Responsibilities:

  • Stores user notification preferences and channel-level settings.
  • Provides low-latency reads for preference enforcement.
  • Acts as the single source of truth for opt-in and quiet hours.

Examples:
Microservice + Redis cache + DynamoDB/Cassandra

Why:
Preference checks are on the critical path and must be fast and consistent.

14. Metadata Database

Primary Responsibilities:

  • Stores notification lifecycle state, delivery attempts, retry metadata, and audit logs.
  • Supports strong consistency for state transitions.
  • Optimized for high write throughput and time-based access patterns.

Examples:
Cassandra, DynamoDB, ScyllaDB

Why:
Designed for massive scale and durability under heavy write load.

15. Cache

Primary Responsibilities:

  • Caches hot data such as preferences, idempotency keys, and rate-limit counters.
  • Reduces load on the primary database and lowers latency.

Examples:
Redis, Memcached

Why:
Improves performance and protects databases under peak load.

16. Analytics & Tracking Service

Primary Responsibilities:

  • Consumes delivery events asynchronously.
  • Generates metrics for success rate, latency, retries, and failures.
  • Supports dashboards, alerts, and reporting.

Examples:
Kafka Streams, Flink, ClickHouse, BigQuery

Why:
Separates observability from the critical delivery path.

17. Monitoring & Alerting Service

Primary Responsibilities:

  • Tracks system health, queue lag, error rates, and SLOs.
  • Triggers alerts for abnormal behavior or degradation.

Examples:
Prometheus, Grafana, Datadog

Why:
Early detection is critical in high-throughput systems.

18. Logging Service

Primary Responsibilities:

  • Aggregates logs from all services for debugging and audits.
  • Supports correlation across distributed requests.

Examples:
ELK Stack, OpenSearch

Why:
Distributed systems require centralized visibility.

19. Security & Secrets Management

Primary Responsibilities:

  • Manages encryption keys, API credentials, and sensitive configuration.
  • Enforces encryption at rest and in transit.

Examples:
AWS KMS, HashiCorp Vault, AWS Secrets Manager

Why:
Protects sensitive data and ensures compliance.

High-Level Flows

Flow 0: Default Notification Flow (Happy Path)

This is the baseline flow that everything else builds on.

  • Client sends a notification request with an idempotency key to the API Gateway.
  • API Gateway authenticates the client, validates the request, and applies rate limits.
  • Request is forwarded to the Notification Service.
  • Notification Service: Validates payload, Classifies notification type (critical / promotional), Assigns priority, Fetches and enforces user preferences, Validates scheduling and expiry
  • Notification metadata is written durably to the database.
  • Notification Service publishes an event to the appropriate queue/topic.
  • Channel Worker consumes the event and sends the notification via the provider.
  • Delivery result is recorded and emitted to analytics.

Guarantee: Notification is accepted, processed asynchronously, and delivered successfully.

Flow 1: Critical Notification (Low-Latency Path)

  • Notification is classified as critical (OTP, chat, security alert).
  • Event is published to a high-priority queue/topic.
  • Dedicated high-priority Channel Workers consume the event immediately.
  • Worker sends notification to the provider with aggressive timeouts.
  • Delivery result is recorded synchronously.

Guarantee: Sub-second p99 latency, No impact from bulk or promotional traffic

Flow 2: Promotional Notification (Best-Effort Path)

  • Notification is classified as promotional.
  • Notification Service enforces: Opt-in / opt-out, Quiet hours, Frequency caps, Expiry time
  • Event is published to a low-priority queue/topic.
  • Workers process messages opportunistically.
  • Before sending, expiry is re-checked.

Guarantee: Delivered only within validity window, Never blocks critical traffic

Flow 3: Scheduled Notification

  • Client provides schedule_at.
  • Notification Service stores the notification in scheduled state.
  • Scheduler Service tracks the schedule using a time-indexed store.
  • At trigger time, Scheduler publishes the event to the queue.
  • Normal delivery flow resumes.

Guarantee: Sent exactly at scheduled time, No early or late delivery

Flow 4: Bulk Notification / Campaign (Fan-out)

  • Client creates a bulk campaign.
  • Notification Service stores campaign metadata.
  • Campaign Service resolves target users asynchronously.
  • Campaign is expanded into per-user notifications in batches.
  • Batched events are published gradually with throttling.
  • Channel Workers deliver independently.

Guarantee: Fan-out is controlled, Bulk traffic never overloads real-time flows
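The batched, throttled fan-out can be sketched as follows; `publish`, the batch size, and the rate limit are hypothetical placeholders for the real queue producer and campaign settings:

```python
# Sketch of batched campaign fan-out with a simple rate limit, so a
# large campaign never floods the queue in one burst.
import time

def fan_out(user_ids, publish, batch_size=1000, max_batches_per_sec=10):
    """Expand a campaign into per-user events in rate-limited batches."""
    interval = 1.0 / max_batches_per_sec
    for i in range(0, len(user_ids), batch_size):
        batch = user_ids[i:i + batch_size]
        publish([{"user_id": uid} for uid in batch])  # enqueue one batch
        time.sleep(interval)  # backpressure: cap the enqueue rate

sent = []
fan_out(list(range(2500)), sent.append, batch_size=1000,
        max_batches_per_sec=1000)  # high limit so the demo runs quickly
print(len(sent))  # 3 batches: 1000 + 1000 + 500
```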

Flow 5: Retry on Transient Failure

Failure Detection

  • Channel Worker calls provider.
  • Provider returns transient error: Timeout, 5xx, Rate limit, Network error

Retry Handling

  • Worker records failure and retry count.
  • Retry Service evaluates retry policy: Is error retryable? Retry count < max?
  • Retry Service computes next retry time (exponential backoff).
  • Retry is scheduled via Scheduler Service.
  • Scheduler republishes the event at retry time.
  • Worker retries delivery.

Guarantee: Safe retries, No retry storms, System remains stable under partial outages

Flow 6: Provider Failover (Multi-Vendor)

  • Channel Worker detects provider degradation: High error rate, Throttling, Timeouts.
  • Circuit breaker opens for the failing provider.
  • Traffic is shifted to a secondary provider (if configured).
  • Delivery attempts continue via backup provider.
  • Primary provider is retried after cool-down.

Guarantee: High availability despite provider outages, Graceful degradation
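A count-based circuit breaker for the failover flow can be sketched as follows; the failure threshold and cool-down values are illustrative:

```python
# Sketch of a simple count-based circuit breaker: after enough failures
# the breaker opens, traffic shifts to the backup provider, and the
# primary is retried only after a cool-down period.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=5, cooldown_sec=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_sec = cooldown_sec
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()  # open: stop sending traffic

    def record_success(self):
        self.failures, self.opened_at = 0, None

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown_sec:
            self.failures, self.opened_at = 0, None  # cool-down over
            return True
        return False

primary = CircuitBreaker(failure_threshold=2, cooldown_sec=60)
primary.record_failure(); primary.record_failure()
use_backup = not primary.allow_request()
print(use_backup)  # True: route traffic to the secondary provider
```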

Flow 7: Permanent Failure β†’ DLQ

  • Notification exceeds maximum retry attempts OR
  • Error is classified as non-retryable (invalid number, blocked email).
  • Notification is marked as failed.
  • Payload and failure context are written to DLQ.
  • Alerts are triggered for investigation.

Guarantee: No silent drops, Full auditability

Flow 8: Idempotent Request Handling

  • Client retries request due to timeout.
  • API Gateway / Notification Service checks idempotency key.
  • Duplicate request is detected.
  • Existing notification reference is returned.

Guarantee: No duplicate notifications, Safe client retries

Flow 9: Cancellation of Scheduled Notification

  • Client requests cancellation.
  • Notification Service validates state.
  • Notification is marked cancelled.
  • Scheduler skips execution if encountered.

Guarantee: Safe cancellation before delivery

Flow 10: Expiry Enforcement

  • Notification has expiry_at.
  • Before delivery, worker checks current time.
  • If expired: Delivery is skipped, Status is marked expired

Guarantee: Promotions are never delivered late
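The pre-delivery expiry check can be sketched directly (the `expiryAt` field name is an assumption mirroring `expiry_at` above):

```javascript
// Expiry enforcement at delivery time. `expiryAt` is a millisecond
// timestamp; notifications without one never expire.
function checkExpiry(notification, now = Date.now()) {
  if (notification.expiryAt != null && now >= notification.expiryAt) {
    return { deliver: false, status: "expired" }; // skip delivery, mark expired
  }
  return { deliver: true, status: "deliverable" };
}
```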

Flow 11: Per-User Ordering (When Required)

  • Notifications are keyed by user/device.
  • Queue guarantees ordering per key.
  • Workers process in order for each user.

Guarantee: Correct ordering for chat and conversational flows

Flow 12: Analytics & Tracking

  • Workers emit delivery events.
  • Analytics Service consumes asynchronously.
  • Metrics, dashboards, and alerts update.

Guarantee: Observability without impacting delivery latency

Deep Dives – Functional Requirements

1. Support Sending Notifications to Users

  • The system exposes asynchronous APIs that allow internal services and external clients to trigger notifications in a non-blocking manner.
  • Once a request is accepted, notification intent is durably persisted, ensuring the notification is not lost even if downstream components fail.

2. Support Delivery Across Multiple Channels

  • Notifications can be delivered through Email, SMS, Push, and In-app channels.
  • Each channel is implemented as an independent delivery pipeline with its own workers, providers, retry logic, and scaling policy, preventing failures in one channel from impacting others.

3. Support Critical and Promotional Notification Types

  • Notifications are classified at ingestion time based on type and priority.
  • Critical notifications are routed through high-priority queues and dedicated workers to guarantee low latency, while promotional notifications are routed through low-priority paths that tolerate delay and throttling.

4. Support User Notification Preferences and Opt-In/Opt-Out

  • User preferences such as channel enablement, quiet hours, and frequency limits are enforced before delivery.
  • Preferences are cached for low-latency access and treated as the source of truth, with limited and explicit overrides allowed for critical system alerts.

5. Support Scheduled Notifications

  • The system allows notifications to be scheduled for future delivery using a distributed scheduler.
  • Scheduled notifications are triggered exactly at the specified time, survive service restarts, and are validated against expiry constraints before being dispatched.

6. Support Bulk Notifications Targeting Large User Groups

  • Bulk notifications are modeled as campaigns that are expanded asynchronously into per-user notifications.
  • Fan-out is performed in batches with throttling and backpressure to protect downstream systems and preserve the performance of real-time notifications.

7. Support Safe Retries and Idempotent Processing

  • All notification operations use idempotency keys to ensure retries do not create duplicates.
  • Delivery failures are retried using controlled retry policies such as exponential backoff, with retry state persisted to survive crashes and restarts.

8. Support Tracking of Notification Delivery Status

  • Each notification and its delivery attempts are tracked through well-defined lifecycle states.
  • Delivery events are emitted asynchronously to analytics systems, enabling auditing, monitoring, and reporting without impacting delivery latency.

Non-Functional Requirements

1. Highly Available and Fault Tolerant

  • The system is composed of stateless services deployed across multiple availability zones.
  • All critical state (notification metadata, retry state, schedules) is stored in replicated and durable systems.
  • Failures of individual services, nodes, or zones do not result in downtime or message loss.

2. Low-Latency Delivery for Critical Notifications

  • Critical notifications are isolated using priority-aware queues and dedicated worker pools.
  • This prevents head-of-line blocking from bulk or promotional traffic.
  • The critical delivery path minimizes synchronous work to achieve predictable sub-second p99 latency.

3. High Throughput with Large-Scale Fan-out

  • The system uses asynchronous ingestion and delivery pipelines backed by high-throughput message queues.
  • Bulk notifications are expanded and delivered in batches with controlled fan-out rates.
  • This allows the system to sustain millions of notifications per second during peak events.

4. Highly Scalable with Increasing Traffic

  • All components scale horizontally and independently.
  • API servers scale with request volume, queues scale via partitioning, and workers scale based on backlog and lag.
  • Capacity increases linearly by adding instances, without architectural changes.

5. Durable Notification Processing with No Message Loss

  • Once a notification request is accepted, it is durably persisted before processing begins.
  • At-least-once delivery guarantees ensure notifications are eventually processed even after crashes or restarts.
  • Explicit lifecycle states prevent silent drops or stuck notifications.

6. Secure Notification Delivery and Access Control

  • All APIs are authenticated and authorized at the gateway layer with tenant-level isolation.
  • Sensitive data is encrypted both in transit and at rest.
  • Access to external delivery providers is tightly controlled using scoped credentials and secret rotation.

7. Cost-Efficient Operation at Scale

  • The system avoids synchronous delivery and keeps the critical path lightweight.
  • Promotional traffic is throttled and deprioritized to reduce peak infrastructure costs.
  • Analytics and reporting are handled asynchronously, keeping delivery fast and cost-efficient.

Trade Offs

1. At-Least-Once Delivery vs Exactly-Once Delivery

Choice: At-least-once delivery with idempotent processing.

Pros

  • Ensures no notification is ever lost.
  • Simplifies system design and improves throughput.

Cons

  • Duplicate delivery attempts are possible in failure scenarios.

Why This Works
Idempotency keys and state tracking prevent user-visible duplicates while preserving durability, which is more critical than strict exactly-once semantics.

2. Priority Isolation vs Single Unified Queue

Choice: Separate queues and workers for critical and promotional notifications.

Pros

  • Guarantees low latency for critical notifications.
  • Prevents promotional spikes from impacting OTPs or chat messages.

Cons

  • Increases operational complexity and infrastructure cost.

Why This Works
Latency guarantees for critical traffic are non-negotiable in real systems, and isolation is the simplest and most reliable way to enforce them.

3. Asynchronous Processing vs Synchronous Delivery

Choice: Asynchronous notification ingestion and delivery.

Pros

  • Enables very high throughput and resilience to downstream failures.
  • Protects clients from provider latency and outages.

Cons

  • Clients do not get immediate delivery confirmation.

Why This Works
Notifications are inherently asynchronous, and durability plus retries provide stronger guarantees than blocking APIs.

4. Fan-out at Write Time vs Fan-out at Read Time

Choice: Fan-out at write time for bulk and campaign notifications.

Pros

  • Simplifies delivery logic and tracking.
  • Allows per-user preference checks and rate limiting.

Cons

  • Higher write amplification and storage usage.

Why This Works
Write-heavy fan-out enables precise control, retries, and auditing, which are required for large-scale notification platforms.

5. Strong Consistency vs Eventual Consistency

Choice: Strong consistency for notification state, eventual consistency for analytics.

Pros

  • Prevents duplicate deliveries and inconsistent user experience.
  • Improves availability and performance for non-critical data.

Cons

  • Analytics may lag slightly behind real-time.

Why This Works
Users care about correct delivery, not real-time dashboards. Separating consistency models optimizes both correctness and scale.

6. Centralized Preference Checks vs Cached Preferences

Choice: Cache-first preference checks with database fallback.

Pros

  • Reduces latency and database load.
  • Supports real-time delivery at scale.

Cons

  • Cache invalidation adds complexity.

Why This Works
Preferences change infrequently compared to delivery volume, making caching a high-impact optimization.

7. Single Provider vs Multi-Provider Strategy

Choice: Multi-provider integration for email and SMS.

Pros

  • Improves reliability and reduces vendor lock-in.
  • Enables failover during provider outages.

Cons

  • Higher integration and operational complexity.

Why This Works
External providers are unreliable by nature; redundancy is essential for critical notifications.

8. Aggressive Retries vs Controlled Backoff

Choice: Controlled retries with exponential backoff.

Pros

  • Prevents retry storms and provider overload.
  • Improves system stability under failure.

Cons

  • Retries may introduce delivery delays.

Why This Works
Stability and provider trust are more important than aggressive retrying, especially at high scale.

9. Immediate Deletion vs Retained Delivery Logs

Choice: Retain notification logs with configurable TTL.

Pros

  • Supports auditing, debugging, and compliance.
  • Enables analytics and reporting.

Cons

  • Requires additional storage.

Why This Works
Storage is cheap compared to the cost of missing audit data in incidents or compliance scenarios.

10. Cost Optimization vs Peak Performance

Choice: Optimize cost for promotional traffic, optimize performance for critical traffic.

Pros

  • Keeps infrastructure costs predictable.
  • Protects user experience for high-priority notifications.

Cons

  • Promotional notifications may be delayed during peak load.

Why This Works
Business impact of delayed promotions is far lower than delayed critical alerts.

Frequently Asked Questions in Interviews

Q. Why do we separate critical and promotional notifications?

  • Critical notifications (OTP, security alerts, chat messages) have strict latency and reliability SLOs, while promotional notifications can tolerate delays.
  • By isolating them into separate queues, partitions, and worker pools, we prevent head-of-line blocking where a promotional spike could delay time-sensitive messages.
  • This guarantees predictable latency for critical traffic even during large campaigns.

Q. Why is at-least-once delivery preferred over exactly-once delivery?

  • Exactly-once delivery requires distributed transactions across queues, databases, and external providers, which is expensive and fragile at scale.
  • At-least-once delivery guarantees durability and availability, which are more important for notifications.
  • User-visible duplicates are avoided using idempotency keys and state checks, achieving practical correctness with far lower complexity.

Q. How do you prevent duplicate notifications during retries?

  • Each notification has a globally unique notification ID or idempotency key.
  • Before sending, workers check the persisted delivery state to ensure the notification hasn't already been delivered.
  • Retries update state atomically, so even if the same message is processed twice, only one delivery attempt succeeds.

Q. How do you handle massive fan-out for promotional campaigns?

  • Bulk campaigns are expanded asynchronously rather than synchronously at API time.
  • The system processes recipients in batches, applies preferences and rate limits, and enqueues individual delivery tasks gradually.
  • Fan-out rate is throttled to protect downstream providers and internal infrastructure.

Q. What happens if the notification service crashes mid-processing?

  • All important state transitions are persisted before moving to the next step.
  • If a worker crashes after pulling a message but before acknowledging it, the message is re-delivered by the queue.
  • Because processing is idempotent, retries do not corrupt state or cause duplicates.

Q. How is per-user ordering guaranteed?

  • Notifications are partitioned by user ID (or user-channel key) in the message queue.
  • Consumers process messages sequentially within a partition, ensuring ordering for a given user.
  • Global ordering is intentionally not guaranteed, as it does not scale and is unnecessary.

Q. How do you handle external provider failures (SMS, Email, Push)?

  • Providers are treated as unreliable dependencies.
  • Each provider integration includes timeouts, bounded retries, and circuit breakers.
  • Failures are retried later or routed to fallback providers if configured.

Q. What if a provider is slow but not fully down?

  • Latency-based circuit breakers detect degradation even when errors are low.
  • Traffic is gradually reduced or paused to avoid queue buildup and cascading failures.
  • This protects system stability and prevents retry storms.

Q. How do you ensure users don’t receive expired promotions?

  • Promotional notifications include an explicit expiration timestamp.
  • Workers validate the expiry at delivery time and discard expired notifications immediately.
  • This ensures correctness even if notifications are delayed due to retries or backpressure.

Q. How are user preferences enforced at scale?

  • User preferences are cached in memory (e.g., Redis) for fast access.
  • The database remains the source of truth but is only consulted on cache misses or updates.
  • This allows preference checks to be performed inline without adding latency.

Q. How do you support scheduled notifications at large scale?

  • Scheduled notifications are stored in time-partitioned storage keyed by execution time.
  • A scheduler scans upcoming time windows and enqueues notifications just-in-time for delivery.
  • This avoids keeping millions of delayed messages sitting in queues.
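A sketch of that just-in-time scan, with a plain array standing in for the time-partitioned store (field names are assumptions):

```javascript
// Scan one upcoming time window and return notifications due inside it,
// oldest first, skipping anything already cancelled.
function dueInWindow(scheduled, windowStartMs, windowEndMs) {
  return scheduled
    .filter(n => n.status === "scheduled")
    .filter(n => n.executeAt >= windowStartMs && n.executeAt < windowEndMs)
    .sort((a, b) => a.executeAt - b.executeAt);
}
```

Each scan only touches one window, so millions of far-future notifications never sit in the delivery queues.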

Q. How do you prevent notification spam?

  • Rate limits are applied per user, per channel, and per tenant.
  • Promotional notifications are capped daily, while critical notifications bypass limits.
  • This protects user experience without impacting essential communication.
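A daily-cap sketch for the promotional path; the cap of 5 per day and the key scheme are illustrative assumptions:

```javascript
// Per-user daily cap for promotional notifications. Critical
// notifications bypass the check entirely.
const DAILY_PROMO_CAP = 5;
const promoCounts = new Map(); // "userId:day" -> count sent today

function allowSend(userId, type, day) {
  if (type === "critical") return true; // never rate-limit OTPs or alerts
  const key = `${userId}:${day}`;
  const count = promoCounts.get(key) || 0;
  if (count >= DAILY_PROMO_CAP) return false; // cap reached, drop or defer
  promoCounts.set(key, count + 1);
  return true;
}
```

In a real deployment the counter would live in a shared store such as Redis, with the day baked into the key so entries expire naturally.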

Q. How is multi-tenancy handled?

  • Each tenant has isolated identifiers, quotas, rate limits, and metrics.
  • Traffic from one tenant cannot starve resources for others.
  • Billing and usage tracking are enforced at the tenant level.

Q. How do you monitor system health?

  • Metrics track queue depth, consumer lag, latency percentiles, retry rates, and provider errors.
  • Dashboards provide real-time visibility, and alerts trigger when SLOs are violated.
  • This allows proactive issue detection before users are impacted.

Q. How do you debug a missing or delayed notification?

  • Every notification has a traceable lifecycle with immutable logs.
  • Operators can trace a notification ID across ingestion, scheduling, retries, and delivery attempts.
  • Dead Letter Queues preserve full context for permanent failures.

Q. What are the biggest scalability bottlenecks?

  • Metadata writes, fan-out amplification, and external provider rate limits.
  • These are mitigated using partitioning, batching, caching, and backpressure.
  • Provider limits often become the true ceiling, not internal infrastructure.

Q. How does the system behave under extreme load?

  • Critical notifications continue to flow with priority.
  • Promotional traffic is throttled, delayed, or dropped first.
  • The system degrades gracefully instead of failing catastrophically.

Q. Why not make notification delivery synchronous?

  • Synchronous delivery couples system availability to external providers.
  • Any provider latency or outage would block clients and reduce availability.
  • Asynchronous processing decouples ingestion from delivery and improves resilience.

Q. How would the system change at 10Γ— or 100Γ— scale?

  • The architecture remains the same.
  • We increase partitions, workers, and regional deployments.
  • No redesign is required; only capacity expansion.

Q. How do you add a new notification channel (e.g., WhatsApp)?

  • Add a new channel processor and provider integration.
  • Core ingestion, scheduling, retry, and tracking logic remains unchanged.
  • This keeps the system extensible and pluggable.

Q. What guarantees does the system actually provide?

  • Near-real-time delivery for critical notifications.
  • At-least-once delivery with idempotency.
  • Per-user ordering where required.
  • No delivery after expiry for promotions.

High-Level Summary

This notification system delivers low-latency, highly reliable critical notifications while supporting large-scale promotional fan-out without interference.
It uses an asynchronous, event-driven architecture with durable queues, idempotent processing, and safe retries to prevent message loss or duplication.
Traffic isolation, rate limiting, and expiry checks ensure correctness and user experience even during spikes or provider failures.
The system scales linearly and cost-efficiently, matching real-world production notification platforms.

Feel free to ask questions or share your thoughts. Happy to discuss!

🧠 JavaScript Type Coercion: A Question That Teaches

Let's talk about a JavaScript expression that looks wrong but is 100% valid 👀

[] == ![]

At first glance, most people expect this to be false.

👉 But the result is true.

Let's break it down step by step, using JavaScript's actual rules: no magic, no guessing.

Step 1: Evaluate the logical NOT

![]
  • [] is an object
  • All objects in JavaScript are truthy
  • Applying ! to a truthy value results in false

So the expression becomes:

[] == false

Step 2: Loose equality (==) triggers type coercion

In JavaScript's loose equality (==) logic, if one of the operands is a Boolean, it is always converted to a Number before the comparison continues.

The Conversion Rule

According to the ECMAScript specification, the process for [] == false (for example) looks like this:

  • Boolean to Number: false becomes 0. (Conversely, true becomes 1).
  • The Resulting Comparison: Now the engine is looking at [] == 0.
  • Object to Primitive: Since one side is a number and the other is an object (the array), JavaScript triggers the ToPrimitive process on the array.

[] == 0

Step 3: Object-to-primitive conversion

  • When using the loose equality operator (==) to compare an object to a primitive, JavaScript uses the "default" hint, which almost always behaves like the Number sequence:
  • valueOf() is called first. (For most plain objects and arrays, this just returns the object itself).
  • toString() is called second because valueOf didn't provide a primitive.

For an empty array:

[].toString() // ""

So now the comparison becomes:

"" == 0

Step 4: Final coercion

In a string vs. number comparison, the string is converted to a number, and Number("") is 0.

"" (empty string) → 0

Comparison becomes:

0 == 0

✅ Result: true

🔑 Key Takeaways

  • JavaScript follows strict, deterministic coercion rules
  • == allows implicit conversions that can be surprising
  • Arrays convert to strings
  • Booleans convert to numbers
  • This behavior is predictable once you know the rules
  • This is exactly why === is recommended in most production code.
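Every step of the [] == ![] walkthrough can be verified directly in a console or Node REPL:

```javascript
// Each intermediate step from the walkthrough, checked in order:
console.log(![]);             // false (objects are truthy, ! flips to false)
console.log([] == false);     // true
console.log([] == 0);         // true  (false coerces to the number 0)
console.log([].toString());   // ""    (empty array stringifies to "")
console.log("" == 0);         // true  ("" coerces to the number 0)
console.log([] == ![]);       // true  (the full expression)
```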

Type Coercion with the + Operator

🔹 Case 1: [] + 1

👉 Result: "1"

Why?

  • + is special in JavaScript: it can mean addition or string concatenation
  • When one operand becomes a string, concatenation wins
  • [] → "" (empty string)

Expression becomes:

"" + 1 → "1"

Type Coercion with the - Operator

🔹 Case 2: [] - 1

👉 Result: -1

Why?

  • - is numeric only
  • JavaScript forces both sides to numbers

[] → "" → 0

Expression becomes:

0 - 1 → -1
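Both cases can be verified directly:

```javascript
console.log([] + 1);          // "1" (string concatenation)
console.log(typeof ([] + 1)); // "string"
console.log([] - 1);          // -1  (numeric subtraction)
console.log(typeof ([] - 1)); // "number"
```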

🚀 Challenge (Object Comparison)

Now that we understand arrays, here's a slightly tougher one:

{} == !{}
{} - 1
{} + 1

Same language. Same coercion rules.

👉 What do you think the output is, and why?

BoldKit Now Supports Vue 3: 45+ Neubrutalism Components for Vue Developers

Hey Vue developers! 👋

Remember BoldKit, the neubrutalism component library I introduced a few weeks ago? Well, I've got exciting news: BoldKit v2.0 is here with full Vue 3 support!

If you missed the original announcement, BoldKit brings the bold, raw aesthetic of neubrutalism to your projects with thick borders, hard shadows, and high-contrast colors that make your UI pop.

BoldKit Preview

What's New in v2.0?

The entire component library has been ported to Vue 3:

  • 45+ components built with Composition API
  • 35 SVG shapes for decorative elements
  • 16 chart types powered by vue-echarts
  • 2 templates (Landing Page & Portfolio)
  • Full TypeScript support
  • Compatible with shadcn-vue CLI

Quick Start

Getting started is dead simple. If you're using shadcn-vue:

# Install a single component
npx shadcn-vue@latest add https://boldkit.dev/r/vue/button.json

# Install multiple components
npx shadcn-vue@latest add https://boldkit.dev/r/vue/button.json https://boldkit.dev/r/vue/card.json https://boldkit.dev/r/vue/input.json

Or set up the registry alias in your components.json:

{
  "registries": {
    "@boldkit": "https://boldkit.dev/r/vue"
  }
}

Then install with:

npx shadcn-vue@latest add @boldkit/button @boldkit/card @boldkit/dialog

Code Example

Here’s what a simple card looks like in Vue:

<script setup lang="ts">
import { Button } from '@/components/ui/button'
import { Card, CardHeader, CardTitle, CardContent } from '@/components/ui/card'
import { Badge } from '@/components/ui/badge'
</script>

<template>
  <Card>
    <CardHeader class="bg-primary">
      <CardTitle class="flex items-center gap-2">
        Welcome to BoldKit
        <Badge variant="secondary">New</Badge>
      </CardTitle>
    </CardHeader>
    <CardContent class="space-y-4">
      <p>Build bold, beautiful interfaces with ease.</p>
      <div class="flex gap-2">
        <Button>Primary</Button>
        <Button variant="secondary">Secondary</Button>
        <Button variant="accent">Accent</Button>
      </div>
    </CardContent>
  </Card>
</template>

Clean, readable, and fully typed. Just how Vue should be. 😎

Vue-Specific Tech Stack

BoldKit Vue is built on solid foundations:

  • Reka UI: Headless primitives (Vue port of Radix UI)
  • vue-echarts: Charts and data visualization
  • vue-sonner: Toast notifications
  • vaul-vue: Drawer component
  • lucide-vue-next: Icons
  • class-variance-authority: Variant management

All components use the <script setup> syntax with full TypeScript support and proper type inference.

What's Included?

Form Components

Button, Input, Textarea, Checkbox, Radio Group, Select, Switch, Slider, Label, Input OTP

Layout & Containers

Card, Layered Card, Dialog, Drawer, Sheet, Accordion, Collapsible, Tabs, Scroll Area, Aspect Ratio, Separator

Feedback & Status

Alert, Alert Dialog, Badge, Progress, Skeleton, Toast (Sonner)

Navigation

Breadcrumb, Dropdown Menu, Command Palette, Pagination, Popover, Tooltip, Hover Card

Data Display

Avatar, Table, Calendar, Charts (Area, Bar, Line, Pie, Radar, Radial)

Decorative (Neubrutalism Special)

Sticker, Stamp, Sticky Note, Marquee, 35 SVG Shapes

BoldKit Shapes

Interactive Documentation

The BoldKit docs now have a framework toggle. Switch between React and Vue to see code examples for your preferred framework:

  • Every component has Vue source code
  • Every example shows Vue usage
  • Installation commands update automatically

Theming Works the Same

The CSS is identical between React and Vue. All the neubrutalism magic comes from CSS variables:

:root {
  --primary: 0 84% 71%;       /* Coral */
  --secondary: 174 62% 56%;   /* Teal */
  --accent: 49 100% 71%;      /* Yellow */
  --shadow-color: 240 10% 10%;
  --radius: 0rem;             /* Keep it square! */
}

Use the Theme Builder to create custom themes β€” it works for both frameworks.

Why Neubrutalism?

If you're new to the style, neubrutalism is characterized by:

Neubrutalism Style Demo

  • Thick borders: 3px solid borders that define elements
  • Hard shadows: Offset shadows with zero blur (4px 4px 0px)
  • Bold colors: High-contrast, vibrant palettes
  • Square corners: No border-radius allowed!
  • Raw typography: Bold, uppercase text for emphasis

It's the anti-minimalism movement, and it's perfect for portfolios, landing pages, and apps that want to stand out.

Links

  • 🌐 Website: boldkit.dev
  • 📦 GitHub: github.com/ANIBIT14/boldkit
  • 📚 Docs: boldkit.dev/docs
  • 🎨 Theme Builder: boldkit.dev/themes
  • 🧩 Components: boldkit.dev/components

Contributing

BoldKit is open source (MIT license). If you find bugs, have ideas, or want to contribute components, PRs are welcome!

Whether you're a React developer or a Vue enthusiast, BoldKit has you covered. Give it a try and let me know what you build!

Drop a ⭐ on GitHub if you find it useful.

Happy coding! 🚀