Never trust the client with your Stripe price

I was reading a Stripe tutorial last week and watched the author write amount: req.body.amount. That single line lets any user buy Premium for €1. It’s also a common pattern in Stripe Checkout starter code. This post is about why, and how to make it impossible.

The setup

You’re building a paywalled product. You wire up Stripe Checkout, follow a popular tutorial, ship it. Looks great. Tests pass. Users are paying.

Six months later, someone opens DevTools, edits the request body, and pays €1 for your Premium plan. Your Stripe dashboard shows a successful charge. Stripe doesn’t validate your business logic. It charged what it was told to charge. Your database shows a Premium subscription. Your billing logic is doing exactly what you wrote.

This is price tampering. It happens at the one line where the server decides what to charge.

The vulnerable pattern

Here’s the shape of the bug, paraphrased from a tutorial I won’t link. You’ve seen this shape before:

// app/api/checkout/route.ts (don't do this)
export async function POST(req: Request) {
  const { priceId, amount, plan } = await req.json();

  const session = await stripe.checkout.sessions.create({
    mode: "payment",
    line_items: [
      {
        price_data: {
          currency: "eur",
          product_data: { name: plan },
          unit_amount: amount, // attacker controls this
        },
        quantity: 1,
      },
    ],
    success_url: `${origin}/success`,
    cancel_url: `${origin}/cancel`,
  });

  return Response.json({ url: session.url });
}

The frontend POSTs { priceId: "premium", amount: 2999, plan: "Premium" }. The server passes amount straight into Stripe. Stripe charges what it’s told.

Exploiting this needs nothing fancy:

curl -X POST https://yoursite.com/api/checkout \
  -H "Content-Type: application/json" \
  -H "Cookie: session=..." \
  -d '{"priceId":"premium","amount":100,"plan":"Premium"}'

amount: 100 is €1.00 in cents. Attacker gets a Stripe Checkout link for €1, completes the payment, and your post-checkout webhook hands them Premium.

The same bug shape applies to priceId if you trust it from the client:

// Also bad. Trusting which price the client picked.
const { priceId } = await req.json();
const session = await stripe.checkout.sessions.create({
  line_items: [{ price: priceId, quantity: 1 }],
  // ...
});

If your “Hobby” plan’s priceId is price_xxx_5eur and your “Enterprise” plan’s priceId is price_xxx_500eur, an attacker swaps the value in the request body and pays €5 for Enterprise.

Why this keeps happening

Three reasons it slips through.

1. Most Stripe tutorials are demos. They want to show you Stripe in 50 lines of code, so they wire the frontend straight to the checkout endpoint. Demos become starter templates. Starter templates become production code.

2. The bug looks like working code. Real users complete real payments. Until somebody opens DevTools, you have no signal that anything is wrong. Logs, dashboards, webhooks, all green.

3. Stripe gives you both APIs. price_data (inline price definition) and price (reference to a Price object) live side by side in their docs. Inline price_data has legitimate uses (true dynamic pricing, donations, marketplace splits). But it’s the same shape as the vulnerable pattern, so the bug hides in plain sight.

The fix in one rule

The client tells you which plan the user wants. The server decides what that plan costs.

That’s it. Implementation:

// app/api/checkout/route.ts (server-determined pricing)
const PLANS = {
  hobby: { priceId: process.env.STRIPE_PRICE_HOBBY },
  premium: { priceId: process.env.STRIPE_PRICE_PREMIUM },
  enterprise: { priceId: process.env.STRIPE_PRICE_ENTERPRISE },
} as const;

type PlanKey = keyof typeof PLANS;

export async function POST(req: Request) {
  const { plan } = (await req.json()) as { plan: PlanKey };

  // 1. Validate the plan key against a server-side allowlist
  if (!Object.hasOwn(PLANS, plan)) {
    return new Response("Invalid plan", { status: 400 });
  }

  // 2. Look up the priceId server-side. Never accept it from the client.
  const { priceId } = PLANS[plan];

  const session = await stripe.checkout.sessions.create({
    mode: "subscription",
    line_items: [{ price: priceId, quantity: 1 }],
    success_url: `${origin}/success`,
    cancel_url: `${origin}/cancel`,
  });

  return Response.json({ url: session.url });
}

The client sends { plan: "premium" }. That’s the most they can influence. The mapping from "premium" to a real, server-controlled priceId is unforgeable. If the attacker sends { plan: "free_lifetime" }, the allowlist check rejects it. If they send { plan: "premium", amount: 100 }, the amount field is ignored. It doesn’t exist in the server’s logic.

For genuinely dynamic amounts (donations, custom one-off charges), you compute the amount on the server from inputs you’ve validated:

// Dynamic amount, still server-determined
const { donationCents } = await req.json();

if (
  typeof donationCents !== "number" ||
  donationCents < 100 ||
  donationCents > 100000
) {
  return new Response("Invalid amount", { status: 400 });
}

const session = await stripe.checkout.sessions.create({
  mode: "payment",
  line_items: [
    {
      price_data: {
        currency: "eur",
        product_data: { name: "Donation" },
        unit_amount: donationCents,
      },
      quantity: 1,
    },
  ],
  // ...
});

The user can choose the amount, but only within bounds you’ve defined. They can’t pass unit_amount: 1 if your minimum is 100.

How to verify you don’t have this bug

A two-minute self-audit:

# 1. Open your /pricing page. Click your most expensive plan.
#    Watch the Network tab when you hit "Subscribe" or "Buy".

# 2. Find the request to your checkout-create endpoint. Copy it as cURL.

# 3. Replay it with a tampered body. Change priceId, amount, plan name,
#    quantity, anything money-shaped:
curl -X POST https://yoursite.com/api/checkout \
  -H "Content-Type: application/json" \
  -H "Cookie: <your auth cookie>" \
  -d '{"plan":"premium","priceId":"price_FAKE","amount":1,"quantity":-1}'

# 4. Check the response. If you got a Stripe Checkout URL, open it.
#    If the price shown is anything other than your real plan price, you have a bug.

If the resulting Stripe Checkout page shows the correct, original price regardless of what you sent, you’re safe. If it reflects the tampered fields, patch before you do anything else.
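If you want this check to live in CI rather than in your memory, you can extract the body parsing into a pure function and assert that tampered fields never survive it. A minimal sketch, assuming the same three-plan allowlist as the fix above; parseCheckoutRequest is a name I made up for illustration, not a Stripe API:

```typescript
// Hypothetical helper: parse an untrusted checkout request body down to
// the single field the server is willing to read. Everything else
// (amount, priceId, quantity, ...) is discarded by construction.
const PLAN_KEYS = ["hobby", "premium", "enterprise"] as const;
type PlanKey = (typeof PLAN_KEYS)[number];

function parseCheckoutRequest(body: unknown): { plan: PlanKey } | null {
  if (typeof body !== "object" || body === null) return null;
  const plan = (body as Record<string, unknown>).plan;
  if (typeof plan !== "string" || !PLAN_KEYS.includes(plan as PlanKey)) {
    return null;
  }
  // Only the validated plan key survives; tampered fields never escape.
  return { plan: plan as PlanKey };
}
```

A unit test can then assert that { plan: "premium", amount: 1 } parses to exactly { plan: "premium" } and that unknown plan keys are rejected, which catches regressions if someone later reads another field from the body.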

Three more places the same bug hides

Once “the server owns money-shaped values” clicks for you, you start seeing it everywhere.

1. Quantity. Same bug, different field. quantity: -1 in older Stripe versions caused weird negative-amount behavior. Validate quantity bounds explicitly.

2. Coupon / promo codes from client. If you let the client say “apply coupon XYZ,” the server has to verify XYZ is real, active, and applies to this plan for this user. Never just pass it through.

3. Customer ID. If the client sends { customerId } to attach the checkout to an existing Stripe customer, an attacker can swap their customerId for someone else’s. Always derive customerId from the authenticated session on the server.

The pattern: anything that influences money or attribution comes from authenticated server state, not from the request body.
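Here is a minimal sketch of that rule in TypeScript. Everything in it (buildCheckoutParams, the in-memory USERS store, ACTIVE_COUPONS, the quantity bounds) is a hypothetical stand-in for your real auth and billing layers. The point is the shape: quantity is bounds-checked, coupons are validated against a server-side table, and the customer ID comes from authenticated state, never from the request body.

```typescript
// Hypothetical stand-ins for a real auth/billing layer.
type AuthedUser = { id: string; stripeCustomerId: string };

const USERS: Record<string, AuthedUser> = {
  user_1: { id: "user_1", stripeCustomerId: "cus_abc123" },
};

// Coupons the server recognizes, and which plans they apply to.
const ACTIVE_COUPONS: Record<string, { appliesTo: string[] }> = {
  LAUNCH10: { appliesTo: ["premium"] },
};

function buildCheckoutParams(
  authedUserId: string, // from the session, never from the body
  plan: string,
  quantity: number,
  couponCode?: string
) {
  const user = USERS[authedUserId];
  if (!user) throw new Error("Unauthenticated");

  // Quantity: explicit bounds (illustrative), not whatever the client sent.
  if (!Number.isInteger(quantity) || quantity < 1 || quantity > 10) {
    throw new Error("Invalid quantity");
  }

  // Coupon: must exist server-side and apply to this plan.
  if (couponCode !== undefined) {
    const coupon = ACTIVE_COUPONS[couponCode];
    if (!coupon || !coupon.appliesTo.includes(plan)) {
      throw new Error("Invalid coupon");
    }
  }

  // Customer: derived from authenticated state, never from the body.
  return { customer: user.stripeCustomerId, quantity, coupon: couponCode };
}
```

Whatever object this returns is the only thing that reaches the Stripe call; a client-supplied customerId or coupon simply has nowhere to go.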

The principle

Stripe is one of the safer payment APIs because it pushes you toward the right patterns most of the time. But it can’t enforce “client doesn’t send money values”. That’s on your code. The same principle applies anywhere the client shouldn’t have authority: authorization roles, feature flags, internal IDs, prices, plan tiers, expiration dates.

Think of a request body as a wish, not a fact. The server decides what to grant.

I run MatchResume.ai, a B2C SaaS with token-based pricing. Exactly the kind of product where this bug would have been embarrassing. The pattern above is what I wish every Stripe tutorial led with, instead of saving it for a footnote.

If you ship paid features and you’ve never tampered your own checkout request as a test, do it tonight. Two minutes, one curl, real peace of mind.

AI Can Write Your Code. But It Can’t Design Your System.

We are living in the golden age of developer productivity. With tools like Copilot and ChatGPT, you can generate hundreds of lines of boilerplate and complex API endpoints in seconds.

It feels like magic. But there is a hidden danger lurking behind that blinking cursor: if you don’t possess foundational architectural knowledge, AI will just help you build a Big Ball of Mud faster than ever before.

The “Junior Developer on Steroids”

Think of AI as the most enthusiastic, tireless, and blisteringly fast Junior Developer you’ve ever managed. It knows the syntax of every language perfectly.

But it has a fatal flaw: It defaults to the easiest path, not the right one.

If you prompt an AI to “write a function to process a user order,” it will happily give you a massive, 300-line controller method. It will hard-code the database connection, mix in the business validation, trigger a third-party payment API synchronously, and tightly couple the entire thing together.

The code will compile. The tests might even pass. But architecturally? It is a ticking time bomb.

Why Foundational Knowledge is Your Superpower

The developers who will thrive in the AI era are not the ones who can type the fastest. The future belongs to the Clarity Engineers—the developers who understand system design, tradeoffs, and architectural boundaries.

When you know software architecture, your relationship with AI completely changes. Instead of accepting its first messy draft, an architected prompt looks like this:

“Write a service class to process user orders. Ensure the core business logic is decoupled from the database using Hexagonal Architecture (Ports and Adapters). The payment processing must not be synchronous; instead, publish a domain event to a message broker so we achieve temporal decoupling.”

Suddenly, the AI isn’t just writing code. It is executing your blueprint.
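For concreteness, here is a rough TypeScript sketch of the structure that prompt asks for. The names (OrderRepository, EventPublisher, OrderService) are illustrative, not from any particular framework: the core logic depends only on ports, and payment is deferred to an event consumer.

```typescript
// Ports: the core defines the interfaces it needs.
interface OrderRepository {
  save(order: { id: string; total: number }): void;
}

interface EventPublisher {
  publish(eventName: string, payload: unknown): void;
}

// Core business logic: no database driver, no payment SDK in sight.
class OrderService {
  private repo: OrderRepository;
  private events: EventPublisher;

  constructor(repo: OrderRepository, events: EventPublisher) {
    this.repo = repo;
    this.events = events;
  }

  processOrder(id: string, total: number): void {
    if (total <= 0) throw new Error("Order total must be positive");
    this.repo.save({ id, total });
    // Payment is not called synchronously; we publish a domain event
    // and let a separate consumer handle it (temporal decoupling).
    this.events.publish("order.placed", { id, total });
  }
}

// Adapters: trivial in-memory implementations for demonstration.
const saved: { id: string; total: number }[] = [];
const published: { eventName: string; payload: unknown }[] = [];

const service = new OrderService(
  { save: (o) => saved.push(o) },
  { publish: (eventName, payload) => published.push({ eventName, payload }) }
);

service.processOrder("ord_1", 49.99);
```

In production the same OrderService would be constructed with a real database adapter and a message-broker publisher, without touching the core class. That swap is exactly what the architecture buys you.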

The Takeaway

AI isn’t going to replace software architects. It is going to make them 10x more powerful. But to wield that power, you need to know the rules of the game so you can instruct the AI on how to play it.

My new book, Grokking Software Architecture (published by Manning Publications Co.), is the practical, conversational guide I wish I’d had when I started my journey nearly two decades ago. It’s fun, engaging, and filled with information you can start using on DAY ONE in your new job, or TODAY at your current one.

Don’t just accept the code the AI hands you. Learn how to hand the AI a blueprint.

Grab your Early Access (MEAP) copy at 🔥 50% OFF today during Manning’s Sitewide Sale: http://hubs.la/Q03-d27Y0

Let’s build systems that last.

Deploying a web page on Amazon EC2 with Nginx

Creating and deploying an instance on Amazon EC2

Have you ever wondered how servers in the cloud work, or how you can publish your own web page on the internet without needing a physical server?

In this lab I will guide you step by step through the process of creating an Amazon EC2 instance, clearly explaining each of the required settings so that you can understand and complete the process without complications.

And we won’t stop at theory: we will use Nginx to deploy a real website and learn how to customize it with our own content, making it available from anywhere.

Step 1: Access Amazon EC2

To start launching an Amazon EC2 instance, go to the AWS console’s search bar and type “EC2”.

Once the service appears, click on it to open the main dashboard. There you will find an orange button labeled “Launch instance”; select it to begin creating the instance.

Step 2: Initial instance configuration

In this step we define the basic parameters of our Amazon EC2 instance.

First, assign a name that makes the instance easy to identify. In this case we used “laboratorio-ec2”.

Next, select the AMI (Amazon Machine Image), which is the operating system template for the instance. The AMI includes the base system and the initial configuration it needs to run.

For this lab we chose Amazon Linux, since it is optimized for AWS, lightweight, and widely used in real environments.

We used t3.micro because it is the most basic and cheapest option on AWS.

  • It is good for learning and testing
  • It is free on the Free Tier
  • It has enough resources for small projects

Step 3: Creating the key pair

In this step we create a key pair, which will allow us to connect securely to our Amazon EC2 instance over SSH.

First, give the key pair a name so it is easy to identify.

Then, select the RSA key type, one of the most widely used and compatible algorithms for SSH authentication, offering a good level of security and ease of use.

As for the format, choose .pem, the most suitable option for connecting from Linux, macOS, or tools like Git Bash on Windows, since it lets you use the ssh command directly.

It is worth mentioning that, although we created the key pair in this lab, we did not use it to connect: we accessed the instance through EC2 Instance Connect, a tool that lets you connect directly from the browser without configuring the private key. In real environments, however, .pem keys are essential and are standard practice for secure SSH connections.

Important tip

It is essential to download this file and store it somewhere safe, since it will be needed to access the instance. If it is lost, you will not be able to connect.

Step 4: Network configuration

In this step we configure the access rules for our Amazon EC2 instance through a Security Group, which acts as a firewall controlling inbound traffic.

For this lab, we enabled the following rules:

SSH (port 22): lets us connect to the instance remotely from our machine.
HTTP (port 80): lets the website be reached from a browser.

These settings are essential: without HTTP access, it would not be possible to view the deployed web page.

With that, the configuration needed to launch our EC2 instance is complete.

Step 5: Connecting to the instance

Once the Amazon EC2 instance has launched, open its details page, where you will find the option to connect.

To do so, select the instance and click “Connect”. In this section, scroll down to EC2 Instance Connect, which lets you access the instance directly from the browser with no additional setup.

Finally, click the “Connect” button, which opens a terminal where you can interact with the instance.

Step 6: Updating the system and installing Nginx

This command updates the operating system, installing the latest available package versions and patching potential vulnerabilities.

sudo dnf update -y

This command downloads and installs Nginx on the instance, leaving it ready to be configured and used.

sudo dnf install nginx -y

Step 7: Starting and enabling Nginx

This command starts Nginx, so the web server begins serving requests.

sudo systemctl start nginx

This makes Nginx start automatically every time the instance reboots.

sudo systemctl enable nginx

Step 8: Getting the public IP address

To reach our web server, we need the public IP address of the Amazon EC2 instance.

To find it, go to the Instances panel, select the instance we created, and look for the “Public IPv4 address” field in the details section.

This is the address we will enter in the browser to view our web page.

This is the web page we have created.

Step 9: Modifying the web page

To customize the content of our site on the Amazon EC2 instance, we need to go to the folder where Nginx stores its web files.

First, change into the corresponding directory:

cd /usr/share/nginx/html

Then, open the page’s main file:

sudo nano index.html

This file contains the content shown in the browser. Here we can edit it and replace Nginx’s default page with our own design.

Step 10: Editing and saving the web page

To customize the site, delete the existing contents of index.html and replace them with the code for your own web page.

Once the changes are made, save them in the nano editor:

Press Ctrl + X
The editor will ask whether to save the changes (Y/N)
Press Y (Yes)
Finally, press Enter to confirm the file name.

Step 11: Viewing the web page

Finally, to see the result of our work, we use the public IP address of the Amazon EC2 instance again.

Enter this address in the browser:

http://TU_IP_PUBLICA

And this is the final result of our web page after the modification.

What I learned from this lab

In this lab I learned, step by step, how to launch and configure an Amazon EC2 instance. I also learned how to connect remotely with EC2 Instance Connect and how to deploy a working web server using Nginx.

I also came to understand the importance of Security Groups for controlling access over SSH and HTTP, and how the public IP address makes a web page reachable from the internet.

Overall, it was a useful exercise for connecting theory with practice and understanding how an application is published in the cloud.

Open-source AI I’m watching: DeepSeek V4, VibeVoice, and the n8n effect

Sunday is my day to skim what shipped, note what seems worth going deeper on, and write a short annotated list before the week catches up with me again. This week was genuinely busy: three frontier labs released major models within a 10-day window, a speech model landed quietly from Microsoft, and n8n crossed a milestone that made me rethink some assumptions.

I’m running three AI-curated directory sites built on Astro 5 + Claude Haiku 4.5. These releases matter to me not just as interesting tech but as practical inputs for what I build next.

DeepSeek V4 Preview (April 24)

DeepSeek dropped V4 on April 24: a 1.6T-parameter Mixture-of-Experts model with 49B parameters activated per forward pass, a 1M-token context window, and an MIT license. The V4-Pro and V4-Flash variants are both live via their API, with Pro at $0.30 per million tokens.

What makes this worth watching for me specifically: 49B activated parameters at that price point puts it in direct competition with Claude Haiku 4.5 for content-generation workloads. I haven’t benchmarked it against my actual task — concise, non-hallucinating product descriptions at scale — so I won’t claim it’s better. But the SWE-bench Pro number (81%) is not nothing, and the MIT license means fine-tuning on domain data is an option if I ever have the infrastructure budget for it. I don’t right now. Good to know it exists.

The other thing I’m noting: the 1M-token context window is large enough to feed an entire site’s content into a single prompt. Whether that’s useful for quality or just a headline feature, I’ll know in a month of testing.

GPT-5.5 (April 23–24)

OpenAI also dropped GPT-5.5 on April 23, with API access following the next day. The notable framing from OpenAI: this isn’t a post-training increment. They rebuilt the architecture, the pretraining corpus, and the training objectives from scratch — first time they’ve done that since GPT-4.5.

I’m watching this more cautiously than the benchmark numbers suggest I should. When pretraining changes substantially, so do second-order behaviors: emergent capabilities, failure modes, prompt sensitivities. The leaderboard tells you the headline. It doesn’t tell you how the model behaves when your prompt is ambiguous or your domain is narrow. I’ll wait 30–45 days for the community to find the edges before I run serious evals.

Microsoft VibeVoice (April 29)

Microsoft released VibeVoice on April 29 — a frontier speech AI model, fully open-source, hosted on GitHub. Honest take: I haven’t used it. Speech-to-text isn’t in my current stack at all. But the open-source release is interesting because Microsoft has historically distributed frontier models through Azure, not GitHub.

If it holds up technically, high-quality speech AI joins the list of things you can self-host without paying a cloud API per-minute rate. That matters more for the open-source ecosystem in aggregate than it does for my specific projects. I’m flagging it because the distribution model, not the capability, is what changed.

n8n crossing 180k GitHub stars

n8n crossed 180,000 stars. It’s a workflow automation platform — visual canvas, 400+ integrations, self-hosted, fair-code license, and now with native AI workflow support built in.

Here’s the honest competitive thought this triggered: n8n can do what my GitHub Actions cron pipelines do — scrape, enrich, call Claude, publish — but without writing YAML. If a non-coder can set up an n8n flow that generates content and posts it to Dev.to, the differentiation for my approach has to come from somewhere else: speed, volume, domain-specific prompt quality, site architecture. That’s where I’m trying to compete. The milestone is a useful reminder to be honest about what is and isn’t a moat.

OpenClaw: from 9k to 210k+ stars

OpenClaw is an open-source personal AI assistant that connects to WhatsApp, Telegram, Slack, Discord, Signal, and iMessage. It went from 9,000 to over 210,000 stars in a matter of weeks earlier this year and is still climbing.

I track this not because it’s relevant to my stack, but because the growth curve is its own signal. OpenClaw didn’t solve a new technical problem — it packaged existing capabilities in a way that fit how people already communicate. That’s a distribution lesson, not a model lesson. When I think about what makes a directory site useful rather than just indexed, I keep coming back to the same question: is this packaged where people already are, or does it require them to come to me?

Five things, five different stakes. DeepSeek V4 and GPT-5.5 are direct inputs to infrastructure decisions I’ll make in the next 60 days. n8n is a competitive signal worth taking seriously. VibeVoice and OpenClaw are watching briefs — I’ll check back in 30 days and see if either has changed my thinking.

Part of an ongoing 6-month experiment running three AI-curated directory sites. The technical claims here are real; this article was AI-assisted.

AI in Journalism

I’ve been running an experiment. I wanted to see if AI could generate opinion articles that capture my personality and perspectives. My AI Daily News site was initially just a way for me to aggregate news stories about AI into something I could digest in the morning before starting work.

Later I thought I would provide it with a range of my prior writing and get it to prepare an ‘Opinion’ with my name on the byline. Would it produce something plausibly mine, presenting my views, but on the news of the day?

Sadly, I think there has been a fundamental change since the early days of OpenAI’s models, when the results were creative, unpredictable, and entertaining. Now they have been trained in a way that produces the same bland writing style regardless of the instructions you provide.

I got into the habit of waking up each morning, reading the ‘opinion’, and coaching Claude to rework it. Why? Because it would write opinions that conflicted with my documented views in fundamental ways. It would include the terms I have used, but it had not internalized the concepts. So each day I would need to correct it.

Multiple news organisations have banned the use of AI in journalism, and now I have first-hand experience of why. It isn’t just opinions, either; the stories it writes also have opinions injected beyond the facts. At least with the stories I have always linked to the original source material at the bottom, which for most stories means at least two sources.

I am not doing journalism by any measure. Journalism means doing the research, doing the interviews, cross referencing, and creating a cohesive angle for the article. Journalism isn’t unbiased, in that it is influenced by the point of view of the writer, but journalistic integrity still means something.

Does this therefore mean that AI can’t play a part in journalism?

My experience with AI in software has parallels. AI will happily generate code that passes ‘tests’ by altering the tests, in effect changing the conditions of success to please the user. AI does work in software development, but only when you have a framework that prevents this kind of gaming.

People have lost trust in journalism, partly as a result of AI slop, the kind of text that doesn’t differentiate between fact and fantasy. There is also the angst of journalists who, fearing for their jobs, resist and minimize the utility of AI. There is a temptation to cite ethics as a reason not to use AI when the real motivation is fear of being replaced.

The answer, I think, will be to apply the same disciplines to AI that apply to human journalists: checking facts and resisting the temptation to opine, while at the same time creating compelling, entertaining, and informative articles.

In my software development, AI has become a partner, but not a replacement. It still needs me to apply that discipline to get good results. Just like software, journalism could benefit from AI, but only with stringent disciplines around how it operates.

AI journalism needs to be more than just a way of ripping off the work of actual journalists; it needs to engage with the real world and be held to the same standards of accuracy. How AI will impact jobs is a larger issue, but it should not be confused with the utility of AI.