Low-Power IoT in the Military Domain: Architecture, Standards, Coatings, and Field Results

Two active military operations are running simultaneously right now, and both are producing a very visible lesson for embedded engineers: the ability to build electronics that operate for months or years on a small battery — without maintenance, without infrastructure — has become a defining characteristic of effective military sensing.

We just finished a 10-post research series on this topic. Here is the condensed version for people who want the engineering substance without wading through 10 articles.

The core architecture: hierarchical power domains

Every ultra-low-power military sensor node — unattended ground sensor, LoRa tactical tracker, soldier-worn biometric node — is built around the same fundamental pattern:

Always-on domain       ~100–500 nA
  └─ wake-up comparator, RTC, PMIC

Intermittent domain    µA range, ms duration
  └─ MCU + ADC + sensor acquisition

On-demand domain       mA range, 100–2000 ms
  └─ LoRa TX, GNSS, camera

The always-on domain gates the intermittent domain via hardware interrupt. The intermittent domain gates the on-demand domain only when there is a decision to transmit. Nothing higher in the stack is ever left drawing quiescent current.

This is not novel — but it is the discipline that separates a node that lasts 3 years from one that lasts 3 weeks.

Practical tip: the STM32WL (the first SoC with LoRa on-chip) draws ~1 µA in deep sleep. Add a TPL5110 nano-timer (35 nA) and cut power to the STM32WL entirely between events, and system standby drops to the nano-timer floor. At 35 nA, the quiescent draw alone would take thousands of years to drain a 3000 mAh AA lithium cell, so practical standby lifetime is set by the cell's self-discharge and comfortably exceeds 9 years.
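
For back-of-envelope checks on claims like these, a few lines are enough. A minimal sketch, with the figures above plugged in as illustrative values:

# Standby lifetime if only the quiescent draw mattered (self-discharge,
# temperature, and active events are deliberately ignored here).
def standby_years(capacity_mah: float, quiescent_na: float) -> float:
    hours = (capacity_mah * 1e-3) / (quiescent_na * 1e-9)  # Ah / A = hours
    return hours / (24 * 365)

print(standby_years(3000, 35))    # TPL5110 floor: ~9,800 years on paper
print(standby_years(3000, 1000))  # STM32WL deep sleep at 1 µA: ~340 years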

The DARPA N-ZERO result is the benchmark

DARPA’s N-ZERO programme (2015–2020) set the standard everyone in military sensing is now measured against:

  • Before N-ZERO: unattended ground sensor lifetime = weeks to months
  • After N-ZERO: up to 4 years on a coin cell
  • Battery size reduction: 20× for equivalent lifetime

The mechanism: MEMS-based conditional wake-up receivers that exploit the energy of the incoming signal (acoustic, seismic, RF) to trigger the electronics — rather than running active electronics to wait for a signal. Zero standby power because the wake-up path is passive analog hardware, not running firmware.

Lesson for commercial IoT: the same principle applies. If your event is infrequent, a hardware comparator at 10 nA will always beat a microcontroller polling at 1 mA — by a factor of 100,000.

LoRa in military tactical applications: what the research shows

Four IEEE papers track the trajectory from 2017 to 2025:

2017 — U-LoRa at 433 MHz for soldier tracking: 5 km range in open terrain, 2 km in forest, <1 mA average draw, full node BOM under $15. The 433 MHz choice (vs 868/915) was deliberate — better foliage penetration for infantry in woodland.

2018 — LoRaWAN evaluation for tactical military use: suitable for logistics tracking and environmental sensing; not suitable for real-time sub-second latency requirements under standard LoRaWAN Class A. Conclusion: use the LoRa physical layer with a custom MAC, not LoRaWAN’s civilian protocol stack.

2019 — Cyber perspective: LoRa’s chirp spread-spectrum achieves negative SNR reception (−20 dB at SF12), making signals difficult to detect passively. Narrowband jamming is less effective. But standard AES-128 LoRaWAN keys are insufficient for anything above unclassified — add a hardware secure element (ATECC608B or equivalent) and application-layer AES-256.

2025 — Complete tactical system: LoRa nodes + mobile gateway + encrypted messaging + store-and-forward when backhaul is unavailable. The store-and-forward piece is the one that makes it viable in denied comms environments.
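
On the 2019 point about application-layer encryption: here is a minimal sketch of AES-256-GCM over a LoRa payload, written with the Python cryptography library purely for illustration. On a real node this runs on the MCU, ideally with the key provisioned into the secure element, and the 12-byte nonce plus 16-byte tag have to be budgeted into the airtime.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)     # in practice: held by the secure element
aead = AESGCM(key)

payload = bytes.fromhex("0102030405060708")   # example sensor readings
nonce = os.urandom(12)                        # must never repeat for the same key
header = b"\x01"                              # authenticated but unencrypted, e.g. frame type

ciphertext = aead.encrypt(nonce, payload, header)   # output includes a 16-byte auth tag
recovered = aead.decrypt(nonce, ciphertext, header)
assert recovered == payload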

Our own field data: Antarctica

We deployed ThingsLog LPMDL-1105 loggers at the Bulgarian Antarctic Base — a seasonally unoccupied research facility on Livingston Island — for the polar winter of 2024.

The constraint set maps almost exactly to a military unattended sensor network:

Antarctic constraint              Military equivalent
No mains power, no solar          Denied environment, no resupply
7 months, no maintenance access   Multi-year UGS deployment
Intermittent Starlink only        Degraded comms environment
−28 °C outdoor                    Arctic theatre
No personnel                      Unattended operation

Architecture:

  • Sensors acquire 4 channels every 15 minutes → stored to local flash
  • 96 readings (24h) buffered per node
  • Once per day: LoRa gateway powers on, collects all nodes, Starlink terminal powers on, uploads to cloud
  • Gateway and Starlink return to powered-off state

Radio config: SF8 fixed, ADR disabled. Reason: we needed to fit 96 readings into a single payload per daily window. SF8 with our binary protocol fit the payload; SF7 ADR would have dropped us below the required capacity.
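
To see why this schedule is so frugal, here is a back-of-envelope daily charge budget for one node. Every current and duration below is an assumed placeholder, not a measured LPMDL-1105 value:

# Rough daily charge budget (mAh/day) for a sleep / acquire / uplink cycle.
SLEEP_UA = 2.0                              # sleep current, µA (assumed)
ACQ_MA, ACQ_S, ACQ_PER_DAY = 5.0, 0.5, 96   # 4-channel acquisition every 15 minutes
TX_MA, TX_S, TX_PER_DAY = 45.0, 3.0, 1      # one SF8 uplink window per day

sleep_mah = SLEEP_UA / 1000 * 24                   # ~0.048 mAh
acq_mah = ACQ_MA * (ACQ_S * ACQ_PER_DAY / 3600)    # ~0.067 mAh
tx_mah = TX_MA * (TX_S * TX_PER_DAY / 3600)        # ~0.038 mAh
daily = sleep_mah + acq_mah + tx_mah

# Battery self-discharge, not the load, ends up bounding the real answer.
print(f"{daily:.3f} mAh/day -> {3000 / daily / 365:.0f} years from a 3000 mAh pack")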

Result: Full winter dataset, zero permanent data loss, no maintenance interventions.

The paper: “Deployment of a Low-Power LoRa-Based Monitoring Network for Environmental and Building Condition Assessment in Antarctica”, IEEE CompSysTech 2025.

Protective coatings: the part most IoT engineers skip

This is where field deployments actually fail. The five coating types under MIL-I-46058C / IPC-CC-830:

Type      Code  Reworkable       Temp range         Best for
Acrylic   AR    Yes (solvents)   −65 to +125 °C     General purpose
Urethane  UR    With effort      −65 to +125 °C     Fuel/chemical exposure
Epoxy     ER    No               −65 to +150 °C     Potting, permanent installs
Silicone  SR    Difficult        −65 to +200 °C     Extreme thermal cycling
Parylene  XY    No (CVD)         −200 to +125 °C    Mission-critical, miniature, marine

Parylene is deposited by chemical vapour in a vacuum chamber at room temperature — it penetrates gaps as small as 0.01 mm, is pinhole-free at 0.5 µm, and passes 144-hour salt spray (MIL-STD-810F). It’s on the DoD Qualified Products List under MIL-I-46058C.

The low-power angle nobody mentions: on an uncoated PCB in a humid environment, surface leakage between adjacent conductors can reach 1–100 µA. If your sleep budget is 300 nA, that leakage is 3–300× your entire power budget. Parylene’s moisture barrier eliminates this.

Standards you actually need to know

If you’re building for NATO or US DoD procurement, these are the ones that matter:

Environmental:

  • MIL-STD-810H — the US reference. Not a rating system — a test method library. You select which methods apply based on the platform life cycle.
  • STANAG 4370 / AECTP-200/400/500 — the NATO equivalent. AECTP-200 for climatic, AECTP-400 for mechanical, AECTP-500 for EMC.
  • DEF STAN 00-35 — UK MoD. Broadly equivalent to MIL-STD-810H with UK platform tailoring data.

EMC:

  • MIL-STD-461G — US. CE102/RE102 for emissions, CS116/RS103 for susceptibility.
  • AECTP-500 — NATO equivalent.
  • DEF STAN 59-411 — UK equivalent.

Power:

  • MIL-STD-1275E — 28V DC vehicle bus. Your power supply must survive load dumps to 100V, cold-crank dips to 9V, and reverse polarity to −18V indefinitely.
  • MIL-STD-704F — aircraft 28V DC / 115V AC.

Components:

  • MIL-PRF-38535 Rev N (Feb 2026) — military IC qualification. Class G (COTS-screened, −40 to +85 °C) is the practical entry point for tactical IoT nodes.

Ingress:

  • IEC 60529 IP67 minimum for dismounted infantry equipment. IP68 for buried sensors. IP69K for CBRN decontamination zones.
  • MIL-STD-810H Method 512 (1 m / 30 min) ≈ IP67. Dual-certify both in one test campaign.

The full series

If any section above is relevant to what you’re building, the full posts are on the ThingsLog blog:

  1. Why Low Power Matters in Military Operations
  2. Key Application Domains: UGS, IoBT, LoRa, Wearables, UAVs
  3. How Military Low-Power Electronics Are Built
  4. Protective Coatings: Parylene, Silicone, Epoxy, Potting
  5. Standards: MIL-STD, NATO STANAG, DEF STAN
  6. IP Ratings and Ingress Protection
  7. Case Study: DARPA N-ZERO
  8. Case Study: LoRa Tactical Troop Tracking
  9. Case Study: ThingsLog LPMDL in Antarctica
  10. Case Study: Army CombatConnect

Happy to go deeper on any of the architecture, protocol, or standards topics in the comments.

CSS Breakpoint Units – design with pixels and get fluid UX for free while automatically solving two of the oldest accessibility problems.

I’ve created many NPM packages and invented many hacks focused on helping us all easily create beyond what’s typically considered possible. It drives me.

This idea, I swear was given to me by higher powers and backed with so much excitement and perfectly sequenced synchronicities… Nobody would believe the details.

A new breakpoint-system

Using my tan(atan2()) scalar idea, I take a <length> like 100cqw and turn it into its pixel equivalent as a <number>.

For example, 1rem length becomes the number 16.

Writing up to 10 user-configurable <number> vars as pixel-based breakpoint locations (maybe called sm, md, lg, etc.),

I combine it all with abs(), clamp(), min(), max(), and calc() into registered <integer> results to create bit flags. 0 or 1.

Each flag represents the results of comparing those breakpoints to 100cqw as a number. Is the number less than the breakpoint? Greater than or equal to? In between two of them?

100cqw becomes the width to query against, the bit flag comparisons are the results of each query.

This opens up 3 API paths

calc() switches, @container style() queries, and if(style()) queries.

aspect-ratio: calc(
  var(--qlt-md) * (5 / 7) +
  var(--qin-md) * (4 / 3) +
  var(--qgte-lg) * (16 / 9)
);

.biggest-heading {
  @container style(--qlt-md: 1) {
    font-size: 20px;
  }
  @container style(--qin-md: 1) {
    font-size: 24px;
  }
  @container style(--qgte-lg: 1) {
    font-size: 28px;
  }
}

grid-template-areas: if(
  style(--qlt-md: 1): "a" "b" "c" "d" "e" "f";
  style(--qin-md: 1): "a a" "b c" "b d" "b e" "f e";
  style(--qgte-lg: 1): "a a a a" "b b c d" "b b f e";
);

calc() can do all of these at the same time using keyframes of an always-paused animation, setting all 3 properties in each keyframe.

You choose your API to choose your user reach. The library reach is 91% as of April 2026.

This is the solution to web development.

Designers win. Developers win. Accessibility wins. Your users win.

Even the consciousness of the idea itself wins because it becomes a source of ease for everyone involved.

Everything we’ve come to accept goes away as everything we’ve hoped for takes its place. With less effort.

Automatic accessibility win #1

100cqw does not include the scrollbars; media queries do. If a user’s system accessibility is configured for scrollbars to always be on, and maybe even set to a bigger size, you have historically been delivering your 1024 breakpoint design into an unknowably smaller available space.

For years engineers have been left to work around it without even reliably being able to detect it in CSS, let alone measure it or have the skill to adjust a designer’s vision appropriately for every potential scenario.

With breakpoint-system, scrollbar gutters biting into a 1024px viewport simply won’t trip the 1024 breakpoint yet, so the smaller design still has plenty of room.

And it gets so much better later because of Breakpoint Units.

Designer and developer wins that are such a relief, your first response may be to doubt it.

We use rem and its decimal fractions everywhere because it is the best solution to support our user’s system font size increases for accessibility.

Just use pixels anyway.

We’re supposed to use em for breakpoints for the same reason, but it’s extremely rare to see it in practice.

No guilt. Keep using pixels.

Designers, devs, and project managers know the max length of our content, especially titles, and we already plan for that as a team.

Keep doing that.

Which of us is responsible for making designs not look terrible, squished, janky, cut-off, overflowing, or just broken when the user’s system font size is 20px – a full 25% bigger?

Pixels don’t do that, so let’s all just use pixels.

We want a fluid design that scales with the view width, respects rem, is aware of the starting size, surrounding breakpoints, and ending size, and isn’t a nightmare to re-calc() if designs change later.

Only the breakpoints you choose to include for designs of this specific layout/component exist as screen sizes.

Like designing physical playing cards. It’s finally safe to ignore all other scenarios and in-between screen sizes.

So pixels are the best choice here too.

Pixels. 🙂

You never need to use rem again.

Breakpoint Units take care of all of this automatically in the background and they map 1:1 with your pixels.

Automatic accessibility win #2

For over a decade, authors like Zell Liew have written about the importance of using em in media query breakpoints, yet we have largely, collectively avoided it. There are two main reasons: pixels are easier to work with, and you have to adjust every breakpoint design and every block of CSS inside every media query to get the intended effect without causing weird max caps in the wrong scenarios. Effectively it doubles the effort required to be mindful in both design and dev.

Taking a step back for a moment.

When we tan(atan2()) 1rem into a number, and divide it by the expected 16, we get a scalar telling us how much the user has adjusted their system font size for accessibility.

20 / 16 = 1.25

We shift your pixel based breakpoints for this user by that amount.

This is equivalent to what using em in media query breakpoints does.

HOWEVER, when combined with this pixel philosophy of my breakpoint-system and Breakpoint Units, your user will never see text smaller than their preference, your designs won’t break, they don’t have to pinch-pan-zoom, nothing overflows.

Breakpoint Units automatically show the previous design, magnified beyond where it would have changed to a bigger design and flawlessly meets the requirement.

Their experience of your designs is exactly the same as any other user’s experience, just bigger as if the website thought their tablet was a phone and scaled up to fill their screen.

Neither designers nor devs need to plan for this or make extra effort. It happens in the background automatica11y.

The smallest breakpoint has no smaller design to show.

Technically this is a caveat – but it requires no new effort.

The status quo for your smallest designs remains important:
Nearly everything is just stacked in a single column, nothing is vertically constrained, and break word is expected. In your smallest defined breakpoint range only, our typography breakpoint units directly express the user’s magnification.

(Breakpoint Units never actually expand for the system font size outside of that smallest range. The math and UX just makes it seem like that’s the reason they’ve grown to meet the requirement!)

Hands-on learning

@propjockey/breakpoint-system

You will never build a personal site or app again without using breakpoint-system as the foundation once you’ve experienced it. Neither will I.

It has solved so many problems, some over a decade old, for so many different groups of people at once, and packaged the API so well, that any full-feature design system built in the future will either use @propjockey/breakpoint-system directly as a foundation or it must …replicate it to keep up.

Once your team has begun experiencing the zero friction handoffs in addition to all of the relief it delivers on the surface, you will agree with what I already know is true

I have created the new baseline expectation.

https://propjockey.breakpoint-system.com

https://github.com/propjockey/breakpoint-system

https://www.npmjs.com/package/@propjockey/breakpoint-system

Open Contact 👽

Please do reach out if you need help with any of this, have feature requests, or want to share what you’ve created!

PropJockey.io CodePen DEV Blog GitHub
Mastodon LinkedIn X Bluesky

My heart is open to receive abundance in all forms,
flowing to me in many expected and unexpected ways.

PayPal Ko-fi Venmo
BTC: bc1qe2ss8hvmskcxpmk046msrjpmy9qults2yusgn9
XRP: rw2ciyaNshpHe7bCHo4bRWq6pqqynnWKQg : 459777128
ETH: 0x674D4191dEBf9793e743D21a4B8c4cf1cC3beF54

The B-52's Angle Computer: navigating by the stars before GPS

When a commercial airliner takes off from San Salvador for Miami today, its position is known to within metres thanks to GPS. In 1961 that luxury did not exist: there were no navigation satellites, ground-based radio systems stopped being useful over the ocean, and no digital computer of the era was small and rugged enough to fly in a bomber. To solve that problem, the US Air Force fitted the B-52 Stratofortress with an automated celestial navigation system whose brain was an almost magical piece of engineering: the Angle Computer, an electromechanical analog computer that solved spherical trigonometry with gears, synchros, and cams.

Computer historian Ken Shirriff recently documented the inside of one of these devices, and what he found looks more like a Swiss watch gone mad than a military peripheral. Nothing spins like a gyroscope; there is no code; there is no digital logic. And yet the Angle Computer could indicate heading to a tenth of a degree while the aircraft crossed the stratosphere at nearly a thousand kilometres per hour. This is the story of how celestial navigation was done before GPS, and why that engineering is still relevant to developers today.

The problem: flying blind over the Arctic

The B-52 entered service in 1955 with a specific strategic mission: carrying nuclear weapons to the Soviet Union along polar routes. In that scenario, ground-based aids disappeared. VOR stations and radio beacons fell out of range, and any strong signal emitted by the bomber itself would reveal its position. The magnetic compass, for its part, becomes useless near the poles, where the magnetic field lines point almost straight down.

The classic solution had existed for centuries: look at the sky. Since the age of the great ocean voyages, navigators have calculated their position by measuring angles against the stars, the sun, or the moon. The problem is that doing celestial navigation by hand is slow, requires up-to-date astronomical tables, and demands decent visibility. In a bomber crossing the Arctic at ten thousand metres, the crew needs an answer in seconds, not minutes. That is where the MD-1 Astro Compass came from: an integrated system of nineteen components whose only job was to automate the old craft of the sextant.

The Angle Computer physically modelled the celestial sphere with gears.

What happened: Shirriff opens up the Angle Computer

In early 2026, Ken Shirriff published a detailed analysis on his blog of the inside of an Angle Computer recovered from a retired B-52. The unit measures roughly 30 centimetres on a side and weighs several kilograms. Remove the cover and you find no printed circuit board and no magnetic memory; instead there are shafts, differential gears, small DC motors, and devices called synchros that transmit angles electrically between subsystems.

The main finding is conceptual: the Angle Computer does not calculate a star's position in the algorithmic sense. It represents it physically. Inside the box is a mechanical model of the celestial sphere in which an internal pointer occupies the angular position of the star being tracked. When the aircraft changes heading or latitude, the mechanism rearranges the rotations of its shafts and the pointer moves accordingly, as if the celestial sphere itself were turning.

The final angles, azimuth (horizontal direction) and altitude (angle above the horizon), are read out through synchros and sent over wires to the telescope sitting on top of the fuselage inside a ten-centimetre glass dome. That telescope, called the Astro Tracker, uses a photomultiplier tube to detect the light of a single star and keep it centred despite the aircraft's motion. The whole chain works as a closed loop between optics, mechanics, and electricity.

Context and history: from the astrolabe to the B-52

Celestial navigation was not invented in the jet age. Polynesian navigators were crossing the Pacific by the stars before the year 1000. Columbus, Magellan, and Cook travelled with astrolabes and quadrants. The sextant, invented in 1731, became the standard for two centuries. The logic is always the same: if I know the exact time and measure the angle of a known celestial body against the horizon, I can deduce my latitude and, with more calculation, my longitude.

The revolution came with three parallel technologies in the mid-twentieth century: the precision gyroscope, which could hold an artificial horizontal plane even aboard a manoeuvring aircraft; the photomultiplier, which detects faint points of light; and the Air Almanac, a US government publication begun in 1941 that tabulated, every ten minutes, the positions of the sun, the moon, the planets, and a key reference called the First Point of Aries. The Angle Computer was the glue between those three worlds.

📌 Note: the Air Almanac is still published today. The US Navy's Nautical Almanac Office maintains digital editions, and it is one of the few government publications with more than eighty years of uninterrupted continuity.

Facts and figures for the MD-1 system

  • 19 components made up the complete Astro Compass, split among amplifiers, computers, control panels, and indicators at the navigator's station.
  • 3 stars could be held simultaneously in the system's star data displays, which let the crew change reference stars with a single switch.
  • 0.1 degrees of heading accuracy, enough for a ten-hour transpolar flight with no ground aids.
  • 4 inches (10 cm) was the diameter of the glass dome on the fuselage that protected the telescope.
  • 10 minutes was the update interval in the Air Almanac for the positions of the sun, the planets, and the moon.
  • 1941 is the year the US government began publishing the Air Almanac, two decades before the B-52 incorporated it into its instrumentation.

Impact and analysis: why it still matters to developers

Understanding the Angle Computer is not an exercise in nostalgia. Three lessons remain relevant for any modern software engineer.

1. Specialized hardware beats general purpose… until it doesn't

In 1961, no digital processor could do real-time spherical trigonometry inside a military aircraft. The closest thing was early discrete-transistor circuitry. Designing a physical mechanism that was the equation turned out to be faster, more reliable, and less fragile than attempting a digital computer. Today we are reliving the same tension with GPUs, TPUs, and neuromorphic chips: when general-purpose software falls short, the industry goes back to specialized hardware.

2. Synchros are the forgotten communication bus

A synchro is a small electrical machine that, instead of driving a load, transmits an angle. If the input shaft turns 30°, the shaft of the remote receiver turns 30° as well. With a shared AC reference and three interconnecting wires, two synchros joined by a cable form an essentially error-free analog channel. It was, in effect, a mechanical-electrical communication protocol in which the "packet" was an angle. Understanding it helps explain why many industrial systems today, such as the servomotors in robot arms, still rely on analogous principles.

3. Resilience by design, not by patch

Celestial navigation still has advantages over GPS: it cannot be jammed, it does not depend on satellites, and it emits no signals that give away the aircraft's position. In the mid-2010s the US Navy brought formal sextant instruction back to Annapolis, precisely because GPS had become a single point of failure. The lesson for software teams is clear: every external dependency is an attack surface, and having a secondary path can be worth its weight in gold when the cloud goes down or the vendor changes its terms.

The B-52 navigator's station centralized the Astro Compass controls.

How it worked: from the sky to a heading

For those of us coming from software, it helps to think of the flow as a pipeline. The navigator picked a star, entered its Sidereal Hour Angle (SHA) and its declination, and the Angle Computer translated those celestial coordinates into angles local to the aircraft. Roughly like this:

flowchart LR;
  A[Air Almanac: SHA + declination] --> B[Master Control Panel];
  B --> C[Angle Computer: mechanical model];
  C -->|azimuth + altitude| D[Astro Tracker telescope];
  D -->|starlight| E[Photomultiplier];
  E -->|optical error| F[Servo corrections];
  F --> G[Heading to 0.1°];

That loop ran continuously for as long as the flight lasted. If the crew wanted to switch stars because of clouds or geometry, they flipped a switch and the system used the preloaded data for another of the three available stars.

An example to understand the trigonometry it solved

The classic formula for altitude from celestial coordinates is:

sin(altitude) = sin(latitude) * sin(declination)
             + cos(latitude) * cos(declination) * cos(LHA)

where LHA is the local hour angle. Solving that today in Python, Go, or Rust takes three lines:

import math

def altitud(lat, dec, lha):
    lat, dec, lha = map(math.radians, (lat, dec, lha))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec)
        + math.cos(lat) * math.cos(dec) * math.cos(lha)
    ))

print(altitud(13.69, -16.72, 45))  # San Salvador, example star

In 1961 there was no math.sin. That same equation was implemented with a gear train whose teeth encoded angles, and with mechanical differentials that added and multiplied rotations. The internal pointer of the Angle Computer was the physical result of that equation, updated thirty times per second without a single line of software.

💡 Tip: if you want to experiment with celestial navigation from LATAM, libraries such as astropy (Python) or SwissEph, or even free services like the Nautical Almanac API, let you compute celestial body positions to arcsecond precision from your laptop.
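
As a starting point, here is a minimal astropy sketch that computes a star's altitude and azimuth for a given place and time. The star, coordinates, and timestamp are placeholders, and the SkyCoord.from_name lookup needs an internet connection:

from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time
import astropy.units as u

star = SkyCoord.from_name("Sirius")                      # online SIMBAD lookup
place = EarthLocation(lat=13.69 * u.deg, lon=-89.19 * u.deg, height=650 * u.m)
when = Time("2026-03-01 03:00:00")                       # UTC

altaz = star.transform_to(AltAz(obstime=when, location=place))
print(f"altitude {altaz.alt.deg:.2f}°, azimuth {altaz.az.deg:.2f}°")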

What comes next: from the sextant to the quantum sensor

The Angle Computer's successor no longer uses gears, but it is still a philosophical relative. Since 2023 DARPA, the US military research agency, has been funding quantum gravity and magnetic-field sensors that would allow navigation on the same premise as the old bombers: measuring immutable properties of the environment instead of depending on signals emitted by third parties. The UK's Royal Air Force is experimenting with miniaturized celestial navigation cameras, the size of a GoPro, that capture stars even in daylight thanks to computer vision algorithms.

For a developer in LATAM this opens concrete opportunities. Regional agricultural drone startups could benefit from a secondary position fix based on a camera and the stars, useful when the spectrum is saturated. Cybersecurity teams auditing airport infrastructure should understand that GPS jamming is a realistic attack, and that positional redundancy is an open question in many deployments.

💭 Key point: the lesson of the Angle Computer is that systems which look obsolete survive because they solve a problem the new system does not fully solve; in this case, the vulnerability of GPS to interference.


Frequently asked questions

What is a synchro and why was it so important?

A synchro is an electromechanical transducer that converts the angular position of a shaft into a three-phase electrical signal, and vice versa. It lets two mechanisms located in different parts of an aircraft turn through exactly the same angle with only a cable between them. In the Astro Compass, synchros carried the angles computed by the Angle Computer up to the telescope on the fuselage.

Why didn't they use a digital computer in 1961?

The digital computers of the era filled entire rooms, dissipated kilowatts, and were sensitive to vibration and cosmic radiation. The Angle Computer, by contrast, fit in a small space, ran on standard aircraft power, and survived accelerations of several g. Digital computers compact and rugged enough to fly, such as the Gemini and Apollo guidance computers, only arrived years later.

Is celestial navigation still taught today?

Yes. The US Naval Academy reintroduced the sextant into its curriculum in the mid-2010s, and several navies, including those of NATO countries, keep mandatory courses. The main motivation is having a backup system if GPS goes down or is jammed during operations.

Can a Latin American developer experiment with this at home?

Absolutely. With a Raspberry Pi, an IMX477 camera, and the astropy library in Python you can build an amateur star tracker. Open source projects such as Astrometry.net identify star fields automatically. It will not give you the B-52's 0.1° accuracy, but it is enough to understand the principle.

Does the Angle Computer still exist in active B-52s?

No. The modern B-52 fleet has used updated inertial navigation systems plus GPS since the late 1980s. The original MD-1 units are now found in aviation museums or in the hands of private collectors, who in many cases restore them for display.

What does this have to do with today's quantum computing or AI?

Both areas are once again leaning on purpose-built hardware (qubits, TPUs, neuromorphic chips) because the equations they want to solve do not fit well on general-purpose CPUs. It is the same logic that led to the Angle Computer: when generic software is not enough, you design a machine shaped to the problem.

References

  • Righto.com, "The electromechanical angle computer inside the B-52 bomber's star tracker": Ken Shirriff's original technical analysis, with photographs of the mechanism.
  • Wikipedia, "Celestial navigation": a general introduction to the method, formulas, and historical context.
  • Wikipedia, "Boeing B-52 Stratofortress": the bomber's operational history and the evolution of its avionics.
  • Wikipedia, "Synchro": operating principles of the synchro transducers used in military avionics.


I’ve built auth six times. Here’s the system I would build today

I am writing this series because I have built authentication from scratch six times, and every time I got it about 80 percent right and discovered the last 20 percent at 2am when a user said something weird happened.

The point is not to talk you into rolling your own auth. Most of the time you should not. The point is that if you are going to, or if you are trying to read the source of a library that does it for you, you should see all the moving parts laid out. Once you see them, you can decide whether to maintain them yourself or let kavachOS do it.

What we are building

A Next.js 15 app with Postgres. Email and password, magic link, Google and GitHub OAuth, passkeys, rate limiting, session rotation, password reset, email verification, and an agent token system for AI scripts that call your API on behalf of users.

By the end of the series you will have either:

  1. Built the whole thing yourself and know exactly what every line does
  2. Looked at the “and the same thing in kavachOS” sections and decided your weekend is worth more than an auth rewrite

Both are fine. I do both depending on the project.

The series in one diagram

                         +-------------------+
                         |   Browser / App   |
                         +---------+---------+
                                   |
                                   | HTTPS
                                   v
                    +----------------------------+
                    |   Next.js app (App Router) |
                    |   /app/auth/*   UI pages   |
                    |   /api/auth/*   endpoints  |
                    +------+---------------+-----+
                           |               |
                  session  |               | outbound email
                  cookie   |               v
                           |          +---------+
                           |          |  Resend |
                           v          +---------+
                  +--------+---------+
                  |   Postgres       |
                  |   users          |
                  |   sessions       |
                  |   oauth_accounts |
                  |   reset_tokens   |
                  |   magic_tokens   |
                  |   verify_tokens  |
                  |   passkeys       |
                  |   agent_tokens   |
                  +--------+---------+
                           |
                           | read via
                           v
                  +------------------+
                  |  Redis / KV      |
                  |  rate limits     |
                  |  active session  |
                  |  lookups         |
                  +------------------+

That is the whole thing. Eight tables, one cache, two network dependencies (Postgres and email), one frontend. Every article in the series fills in one box.

What each article covers

#   Article                   What ships
01  You are here              Architecture, schema preview, series map
02  Database schema           SQL for all 8 tables, index choices, the drizzle schema
03  Register user             Signup form, password rules, hash choice, email verification trigger
04  Login                     Form, session cookie, CSRF token, remember me, timing defense
05  Password reset            Token generation, one-time use, session rotation on success
06  Email verification        Sending, verifying, re-sending, bounce handling
07  Magic link login          Passwordless token flow with 10 minute expiry
08  OAuth (Google, GitHub)    PKCE, state, callback validation, account linking
09  Passkeys                  WebAuthn registration and assertion with a password fallback
10  Rate limiting             Login attempts, email enumeration defense, CAPTCHA gating
11  Agent tokens / MCP OAuth  Minting scoped tokens for scripts and AI agents
12  Deploy                    Cloudflare Workers with D1 and Durable Objects

The tables at a glance

I will spend article 02 on each of these. If you are following with kavachOS rather than building by hand, you do not write the DDL: pnpm kavachos migrate creates and maintains all of these for you. Article 02 still exists because it is worth knowing what is in your database, whichever library you use.

Here is the preview so you have the shape in your head:

users                       sessions
+--------------+            +------------------+
| id           |<-----+     | id               |
| email        |      +-----| user_id          |
| password_hash|            | token_hash       |
| email_verif. |            | expires_at       |
| created_at   |            | last_used_at     |
+--------------+            +------------------+

oauth_accounts              password_reset_tokens
+--------------+            +------------------+
| id           |            | id               |
| user_id      |------+ +---| user_id          |
| provider     |      | |   | token_hash       |
| provider_uid |      | |   | expires_at       |
| access_token |      | |   | used_at          |
+--------------+      | |   +------------------+
                      | |
magic_link_tokens     | |   email_verification_tokens
+--------------+      | |   +------------------+
| id           |      | |   | id               |
| user_id      |------+ +---| user_id          |
| email        |      | |   | token_hash       |
| token_hash   |      | |   | expires_at       |
| expires_at   |      | |   | used_at          |
+--------------+      | |   +------------------+
                      | |
passkeys              | |   agent_tokens
+--------------+      | |   +------------------+
| id           |      | |   | id               |
| user_id      |------+ +---| user_id          |
| credential_id|            | token_hash       |
| public_key   |            | permissions[]    |
| counter      |            | expires_at       |
+--------------+            +------------------+

If you squint, it is 8 tables with one join key. That is on purpose. Boring schemas age well.

The security properties the system has to maintain

Every article will reference this list. Tape it to your monitor.

  1. No account enumeration. /login, /forgot-password, /register return the same response for “user exists” and “user does not exist”. Timing differences on those endpoints should be under 50ms.
  2. No token reuse. Every reset link, magic link, and verification link is one time use. The check is a used_at IS NULL clause, not an application-level flag.
  3. Short expiries. Reset tokens expire in 15 minutes, magic link tokens in 10. Email verification in 24 hours. Sessions can run long (30 days) because they are revocable.
  4. Hashed token storage. Every token in the database is the SHA-256 of the raw value; the raw value exists only in the email or URL (see the sketch after this list).
  5. Session rotation on privilege change. Password reset, email change, and OAuth unlink all invalidate every existing session for that user.
  6. Rate limits on every unauthenticated endpoint. Login, forgot-password, register, magic link request, verification resend.
  7. CSRF on session-carrying endpoints. Double submit cookie or SameSite=Lax + custom header.
  8. Structured audit logs. auth.register.success, auth.login.failure, auth.session.rotated, and so on. You will want these the first time a user claims they did not do something.
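
To make properties 2 and 4 concrete, here is a minimal sketch (Python for brevity; the series itself builds this in TypeScript). The table and columns are the ones from the schema preview, and db stands in for whatever query interface you use:

import hashlib
import secrets

def mint_reset_token() -> tuple[str, str]:
    """Return (raw token for the email, SHA-256 hash for the database)."""
    raw = secrets.token_urlsafe(32)            # the raw value never touches the database
    return raw, hashlib.sha256(raw.encode()).hexdigest()

# One-time use as a single conditional UPDATE: the used_at IS NULL check and
# the "mark as used" write cannot race each other.
CONSUME_SQL = """
UPDATE password_reset_tokens
   SET used_at = now()
 WHERE token_hash = %(token_hash)s
   AND used_at IS NULL
   AND expires_at > now()
RETURNING user_id;
"""

def consume_reset_token(db, raw_token: str):
    token_hash = hashlib.sha256(raw_token.encode()).hexdigest()
    row = db.execute(CONSUME_SQL, {"token_hash": token_hash}).fetchone()
    return row[0] if row else None             # None: invalid, expired, or already used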

The trade-off you are making by doing this yourself

Writing this code is not that hard. Maintaining it is. Every one of these articles represents 2 to 6 hours of initial implementation plus a long tail of reading advisories, upgrading libraries, and handling weird bugs.

The rough math:

DIY auth
  initial build:       60 hours of focused work
  annual maintenance:  30 to 80 hours
  incident cost:       unknown, but nonzero

Managed (Auth0, Clerk)
  initial build:       4 hours
  annual cost:         $1,000 to $50,000 depending on MAU
  incident cost:       their team handles it

Open source library (kavachOS, Better Auth)
  initial build:       4 to 8 hours
  annual maintenance:  5 to 15 hours (upgrades)
  incident cost:       you handle it, but the code is readable

If you are a solo dev with a side project, the library path is usually right. If you are at a company with a security team that wants to own the code, DIY is reasonable. If you are between those, a managed service buys you time.

I use kavachOS because I want to own the code but not write it. Your answer will depend on your context.

How to follow along

Pick one of these three paths:

Path A: read along, do not build. You will still get value from seeing how the pieces fit.

Path B: build with me from scratch. Clone the starter repo at github.com/kavachos/nextjs-auth-from-scratch. Each article has a matching branch.

Path C: build with kavachOS and skim the DIY parts. Run:

pnpm create next-app@latest my-auth-app --typescript --app --tailwind
cd my-auth-app
pnpm add kavachos @kavachos/nextjs

Then follow the “kavachOS version” section at the bottom of each article.

What you need installed

Before article 02, have these ready:

node --version   # 20 or higher
pnpm --version   # 9 or higher
psql --version   # 15 or higher, or a Neon/Supabase URL

A Resend account (or any SMTP provider) for email. An Upstash Redis instance or Cloudflare KV namespace for rate limits. That is it.

Why I am writing this as a series instead of one mega post

Two reasons.

One, auth is broken up into real sub-problems that each deserve their own treatment. Jamming all of it into one 40,000 word post means nobody reads it.

Two, I want to write for 12 days straight. Forcing publication at this cadence means I cannot perfect anything, which is good. The first draft of a system is more honest than the polished one.

Next up

Article 02: the database schema. Every table, every index, every column nobody thinks about until they get a support ticket at midnight. We will also look at why text beats varchar on Postgres, why id should be a bigserial and not a uuid in most cases, and how to make your email column case-insensitive without hating yourself.

See you tomorrow.

Comment with the article you would most like me to skip ahead to. If enough people ask for the same one, I will reorder the series.

Tags: #authentication #webdev #nextjs #tutorial

Half Your Free Trial Signups Are Fake: Here’s How to Fix It

The Problem Every SaaS Faces

Fake signups from disposable emails are killing your metrics. Someone signs up with throwaway@mailinator.com, uses your free trial, and vanishes.

Why It Matters

  • Wastes customer success time
  • Inflates your user count (fake growth)
  • Costs you money (server resources, email sends)
  • Makes your analytics useless

The Solution: Email Risk Scoring

Instead of just validating syntax, check:

  1. Is it from a disposable provider? (mailinator, guerrillamail, etc.)
  2. Does the domain have MX records?
  3. Free provider vs business email?

What Good Email Validation Detects

A proper email risk scorer checks multiple signals:

  • Disposable domains: Is it from Mailinator, Guerrilla Mail, or 500+ other temporary providers?
  • MX records: Can the domain actually receive emails?
  • Provider type: Free (Gmail/Yahoo) vs Business (company.com)?
  • Role-based emails: Generic addresses like info@, admin@, noreply@
  • Risk level: Low, Medium, or High

How to Implement It

Option 1: DIY Solution (Free, But Manual)

# Use the GitHub disposable-email-domains list
import requests

DISPOSABLE_DOMAINS = set(
    requests.get(
        "https://raw.githubusercontent.com/disposable-email-domains/disposable-email-domains/main/disposable_email_blocklist.conf",
        timeout=10,
    ).text.split()
)

def is_disposable(email):
    domain = email.rsplit("@", 1)[-1].strip().lower()
    return domain in DISPOSABLE_DOMAINS
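
The second DIY signal, MX records, can be checked with the dnspython package (pip install dnspython). A minimal sketch with deliberately coarse error handling:

import dns.resolver

def has_mx_record(email: str) -> bool:
    domain = email.rsplit("@", 1)[-1].strip().lower()
    try:
        answers = dns.resolver.resolve(domain, "MX")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
            dns.resolver.NoNameservers, dns.resolver.LifetimeTimeout):
        return False
    return len(answers) > 0

print(has_mx_record("someone@gmail.com"))        # True: Google publishes MX records
print(has_mx_record("someone@example.invalid"))  # False: domain does not resolve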

Option 2: Use a Ready-Made API (Faster Setup)

Or skip all that complexity and use a ready-made solution.

I built this to solve the problem: https://apify.com/mayno/email-risk-scorer

What you get out of the box:

  • 500+ disposable domains detected (Mailinator, Guerrilla Mail, YOPmail, etc.)
  • MX record validation – checks if the domain can actually receive emails
  • Email provider identification – detects Google Workspace, Microsoft 365, Zoho, etc.
  • Free vs business email classification – Gmail/Yahoo vs company domains
  • Role-based email detection – flags generic addresses (info@, admin@, noreply@)
  • Automatic syntax validation – RFC-compliant email format checking
  • Risk level scoring – returns low/medium/high based on multiple signals
  • Batch processing – validate up to 10,000 emails per run
  • No maintenance – disposable domain list updated automatically
  • Fast processing – handles 1,000 emails in ~3-5 seconds

Setup time: 5 minutes. Here’s the complete implementation for an Express.js signup route:

import express from 'express';
import { ApifyClient } from 'apify-client';

const app = express();
app.use(express.json()); // needed so req.body is populated

const apifyClient = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

app.post('/signup', async (req, res) => {
  const { email, password } = req.body;

  // Check email risk (one API call)
  const run = await apifyClient.actor('mayno/email-risk-scorer').call({
    emails: [email]
  });

  const results = await apifyClient.dataset(run.defaultDatasetId).listItems();
  const emailRisk = results.items[0];

  // Block disposable emails
  if (emailRisk.isDisposable) {
    return res.status(400).json({
      error: 'Disposable email addresses are not allowed'
    });
  }

  // Block high-risk emails
  if (emailRisk.riskLevel === 'high') {
    return res.status(400).json({
      error: 'This email address cannot be used for registration'
    });
  }

  // Optional: Flag free email providers for review
  if (emailRisk.isFreeProvider && !emailRisk.hasMXRecord) {
    // Log for manual review
    console.log('Suspicious signup:', email);
  }

  // Proceed with signup
  await createUser(email, password);
  res.json({ success: true });
});

Example API response:

{
  "email": "test@mailinator.com",
  "isValid": true,
  "isDisposable": true,
  "isFreeProvider": false,
  "isBusinessEmail": false,
  "isRoleBasedEmail": false,
  "domain": "mailinator.com",
  "hasMXRecord": true,
  "mxProvider": "mail.mailinator.com",
  "riskLevel": "high",
  "reasons": ["Disposable/temporary email provider"]
}

Why this beats the DIY approach:

  • No code to maintain (I handle updates)
  • No DNS infrastructure to manage
  • No edge cases to debug
  • No weekly domain list updates
  • Works day one, scales to millions

Time saved: roughly 6 hours of initial setup plus 1-2 hours of maintenance per month. You ship faster and focus on your product.

Conclusion

Pick what works for your stage:

  • Early/MVP: DIY approach
  • Scaling: Use an API

Try It Yourself

Want to test it out? The actor is live on Apify with a free tier: https://apify.com/mayno/email-risk-scorer

Questions? Feedback? Drop a comment below. I’m actively working on this and would love to hear what features would be most useful!

Embarrassment is cheap. Token spend isn’t.

I was in a meeting today.

The team was walking me through a new feature. I was nodding. I’d used the word “endpoint” correctly twice. I was feeling sharp.

Too sharp.

Because my brain… a peaceful place where every feature is shaped like the last feature that worked… decided: oh, this is probably another markdown file.

It was not another markdown file.

It was parallel multi-agent recursive language model creation.
I didn’t know that yet. I leaned back. I smiled. I said it.

“Should be quick, eh?”

Silence. Then laughter. Not polite laughter. Not boss-is-trying laughter. The full kind. The kind that’s been building up for a while.

And I was fine with it.

Actually I was more than fine. Embarrassment is an underexplored emotion in founder life. The best version of me is the one who stays in the room after saying something dumb — not the one who stops asking questions to protect the illusion.

The lead engineer didn’t laugh. He looked at me the way doctors look at patients who have just confidently explained that their cold is probably bacterial.

“It’s not quick.”

“Oh. Why?”

He explained. This was theoretical. There was one research paper on it. One. If we pulled it off, we’d be doing something that hadn’t been done before.

And then I asked the second dumb question. The one the first dumb question unlocks. Something about how much this was going to cost to run.

The room got quieter in a different way.

We spent the next twenty minutes on token consumption. Whether the recursion depth could be capped. Which agents actually needed to talk to which. Stuff the team would have gotten to eventually — but not in that meeting, not in that order.

Here’s the thing.

“Should be quick” was the wrong thing to say. But me-saying-the-wrong-thing-out-loud turned out to be the right thing in the room. If I’d protected my pride, I’d have nodded through a plan that burned a lot of tokens.

So I’m going to keep doing it. I’m going to keep mistaking research papers for markdown files. I’m going to keep being the guy who says “should be quick” five minutes before the engineers collectively grieve.

Embarrassment is cheap. Token spend isn’t.

Progress. Velocity.

Proven SSL Certificate Renewal Steps to Protect Your Site

Originally published at https://monstermegs.com/blog/ssl-certificate-renewal/

If you have issued a new SSL certificate for your website since March 15, 2026, it is already set to expire sooner than you might expect. On that date, the maximum validity period for any newly issued TLS certificate dropped from 398 days to 200 days – the first stage of a sweeping change approved by the CA/Browser Forum in April 2025. The change makes SSL certificate renewal roughly twice as frequent as it was just weeks ago, and the timeline will keep tightening through 2029. For site owners still relying on manual processes, this is not a future problem. It is an active one.

What Changed on March 15, 2026

The CA/Browser Forum is the industry body that governs how SSL and TLS certificates are issued, validated, and trusted by browsers worldwide. On April 11, 2025, it passed Ballot SC-081v3 – a measure to progressively shorten certificate lifetimes over the next three years. The ballot passed with 29 votes in favour and zero opposed, making it one of the most decisive rulings in the Forum’s history. No certificate authority or browser vendor dissented.

The first enforcement milestone arrived on March 15, 2026. Any certificate issued from that date forward carries a maximum validity of 200 days. Certificates issued the day before the cutoff could still carry a full 398-day lifespan. The gap between those two is significant for any administrator managing SSL certificate renewal manually or through ad-hoc calendar reminders, because the renewal window just halved without warning for anyone not paying close attention.

Why SSL Certificate Renewal Has Become Urgent

Before March 2026, most hosting customers and site administrators approached SSL certificate renewal as a roughly annual task – one automated reminder email, one click, done for another year. That rhythm is now broken for anyone issuing new certificates under the current rules. SSL certificate renewal is required at minimum every six months starting today, and the schedule compresses further in the years ahead. By March 2027, the maximum drops to 100 days. By March 2029, it falls to 47 days.

The reasoning behind the change is straightforward. The CA/Browser Forum argues that shorter certificate lifetimes reduce the risk window when a private key is compromised or a certificate is incorrectly issued. Under a 398-day validity window, a mis-issued or stolen certificate could remain trusted by browsers for over a year before it would naturally expire. A 47-day cap cuts that window to less than two months. In this framing, SSL certificate renewal is not merely an administrative obligation – it is a security mechanism with a direct impact on how long threats can persist undetected.

The Three-Stage Timeline From 200 to 47 Days

The ballot was structured as a phased rollout deliberately, giving certificate authorities and website operators time to adapt their SSL certificate renewal infrastructure before the most aggressive requirements take effect.

The Phase-by-Phase Reduction Schedule

Phase one is now active. From March 15, 2026, newly issued certificates cannot exceed 200 days. This supports a twice-yearly SSL certificate renewal cadence that is difficult to manage without automation but not impossible with good tooling and clear alerts.

Phase two arrives in March 2027. The maximum shrinks to 100 days, shifting SSL certificate renewal to a quarterly cycle. At this frequency, a single missed reminder can leave a certificate expiring within weeks, with no buffer time to troubleshoot problems or wait for CA processing.

Phase three lands in March 2029. The 47-day cap means SSL certificate renewal must take place roughly every five to six weeks for every domain you operate. No realistic manual workflow can sustain that across a portfolio of any meaningful size. Automation is not just advisable at that stage – it is the only viable approach.

Image: SSL certificate renewal timeline showing the 200-day, 100-day, and 47-day maximum validity phases.

Who Pushed This Change and Why

Apple was the primary sponsor of Ballot SC-081v3. The company has consistently led industry efforts to shorten certificate lifetimes, previously driving the reduction from five years to one year and then to the 398-day ceiling that just expired. Apple’s argument has remained consistent throughout: the longer a certificate remains valid without re-verification, the higher the probability that the domain ownership information it carries is no longer accurate or that the underlying private key has been exposed. Shorter SSL certificate renewal intervals keep that verification data current.

Google, Mozilla, and Microsoft all voted in favour. That cross-browser consensus matters because it signals that all major trust stores will enforce the new limits – there is no path for a CA to issue a longer-lived certificate and have it trusted. DigiCert, Sectigo, GlobalSign, and Let’s Encrypt also supported the ballot, suggesting the industry views the operational burden of more frequent SSL certificate renewal as an acceptable trade-off for a meaningfully more secure web.

Certificate Authorities Are Now Adapting

The immediate challenge falls on certificate authorities and the businesses that depend on them. DigiCert has published detailed guidance indicating that organisations relying on manual certificate management need to approximately double their SSL certificate renewal workload under the 200-day rule alone. For enterprises with hundreds of certificates spread across subdomains, load balancers, APIs, and application servers, the additional overhead is significant.

Domain validation reuse periods have also been tightened under the same ballot. Previously, a certificate authority could reuse a completed domain validation check for up to 825 days. That window has been shortened in parallel with the certificate lifetime changes, meaning SSL certificate renewal now requires more frequent re-verification of domain ownership – not just the generation of a new certificate from an existing validated record.

In response, major CAs are expanding their certificate lifecycle management platforms. DigiCert’s CertCentral, Sectigo’s Certificate Manager, and similar enterprise tools are all being updated to support automated SSL certificate renewal at scale, with API-driven workflows that eliminate the need for human intervention at each renewal cycle.

Automating SSL Certificate Renewal With ACME

The ACME protocol – Automatic Certificate Management Environment – was built precisely for a moment like this. Standardised by the IETF as RFC 8555, ACME allows web servers to request, validate, and install TLS certificates programmatically, with no human involvement required. Let’s Encrypt built its entire free certificate service around this protocol and has been providing automated SSL certificate renewal since 2016. For sites already using Let’s Encrypt, renewal happens silently every 60 to 90 days via tools like Certbot, acme.sh, or the AutoSSL feature available in cPanel-based hosting environments.

Let’s Encrypt and the Rise of Auto-Renewal

Let’s Encrypt certificates already max out at 90 days – comfortably within every phase of the CA/Browser Forum’s new timeline, including the 47-day cap that takes effect in 2029. Websites running on Let’s Encrypt with a functioning ACME client are already compliant with requirements that will not become mandatory for another three years. Their SSL certificate renewal workflows require no immediate changes.

The larger disruption hits organisations that have historically used commercial certificates with annual or 13-month validity periods, renewed manually or through a loosely maintained script. For those operators, the question has shifted from whether to automate SSL certificate renewal to how quickly they can make the transition. Enterprise certificate lifecycle management tools from vendors like Venafi, AppViewX, and Keyfactor are seeing heightened interest as a result. cPanel and DirectAdmin hosting panels are also improving their built-in renewal automation to reduce reliance on manual intervention. If you want to see what fully managed SSL certificate renewal looks like in a shared hosting environment, the SSL certificate options at MonsterMegs include AutoSSL with Let’s Encrypt on every plan.

What Site Owners Should Do Right Now

The March 15 change is already in effect. If you have issued a new certificate since that date, your SSL certificate renewal deadline is closer than it would have been under the old rules – 200 days from issuance rather than nearly 13 months. The first priority is confirming that your certificates are configured for automatic renewal. On cPanel-based hosting, check the AutoSSL settings under the SSL/TLS section and verify that the renewal daemon is active and completing jobs successfully.
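
Independent of which CA or automation stack you settle on, it is worth running your own watchdog on expiry dates. A minimal check using only the Python standard library (the hostname is a placeholder):

import socket
import ssl
import time

def days_until_expiry(hostname: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires_at - time.time()) // 86400)

# Alert well before the deadline; under the 200-day rule a 30-day margin is reasonable.
print(f"{days_until_expiry('example.com')} days of validity left")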

For sites using commercial certificates from paid CAs, contact your provider and ask specifically about their automated SSL certificate renewal APIs or management portal options. Most major CAs now offer tooling that integrates with common deployment pipelines. Moving to automation is a direct and proportionate response to the CA/Browser Forum’s updated rules – not a premature upgrade.

Sites that handle customer transactions, store personal data, or run e-commerce operations face the most serious consequences from a missed SSL certificate renewal. An expired certificate does not only produce a browser warning – it actively breaks HTTPS, destroys visitor confidence, and can interrupt checkout flows entirely. The risk profile of getting this wrong is higher today than at any point in recent history. For a broader look at how server-level security decisions stack up, the post on PHP hosting security risks covers several related areas where neglected maintenance creates compounding exposure.

The Bottom Line

The CA/Browser Forum’s unanimous ruling is now the enforced standard for the web. The 200-day SSL certificate renewal requirement has been active since March 15, 2026. The 100-day limit arrives in March 2027, and 47 days follows in March 2029. Anyone still running manual SSL certificate renewal processes needs to treat automation as an infrastructure priority, not something to revisit later.

The tools to make SSL certificate renewal seamless already exist and are widely available – Let’s Encrypt and Certbot are free, ACME support is built into most modern hosting control panels, and enterprise-grade lifecycle management platforms are maturing quickly. The cost of getting this wrong is a broken HTTPS connection, a browser security warning, and lost visitor trust. If you are evaluating hosting that handles SSL certificate renewal automatically and keeps your site secure by default, MonsterMegs web hosting plans include AutoSSL through Let’s Encrypt on every account.

LinkedIn or LinkeDone?

Nine months of silence is a long time to spend shouting into a void.

Since last July, I have been a full-time participant in the current job market. My background isn’t thin: it’s a ten-year portfolio of coding that includes six years of self-taught grit, 2.5 years of professional full-stack experience across multiple companies, and the ongoing management of a live production site. I’ve even sat in the lead chair for a startup, handling everything from embedded software to team management and architecture decisions.

Yet, despite a decade of technical work, a small portfolio, and consistent applications for roles I have already performed, the result is a perfect zero. Not a single interview request in 270 days.

The Audit

When logic fails, you start running experiments. To see if the issue was my approach or the platform itself, I treated my search like a technical audit:

The Rebuild: I scrapped my CV and built new versions from the ground up, matching the specific vernacular of each job description.

The Stress Test: I even tested the “validity” of the listings by slightly embellishing qualifications to create a near-perfect match for the algorithms.

The response remained unchanged: absolute silence. It is a strange reality when a candidate with high-stakes production experience cannot even trigger a screening call.

A Professional Marketplace?

It forces a necessary question: Is LinkedIn still a job board, or has it transitioned into just another social media platform for sharing opinions, accolades, and memes?

The math doesn’t seem to add up. It leads one to wonder if many of these “openings” are actually legitimate opportunities. Are we looking at ghost roles—corporate theater used to project an image of growth and strength while hiring is effectively frozen?

The Documented Reality

This isn’t a hot take; it’s a record of a broken feedback loop. When 270 days of consistent effort from an experienced developer yields zero engagement, the system has drifted far from its original purpose.

I am documenting this simply to ask: is this the new standard? If the primary goal of connecting talent with work is no longer being met, is it time to consider a major migration toward something that actually works?

Has anyone else met this level of silence?

Security Issue in YouTrack (CVE-2026-33392): Upgrade Recommended for Server Versions Before 2025.3.132953

A security vulnerability in YouTrack came to light in March 2026, and we fixed it immediately. For most YouTrack administrators this is purely an informational post and no action is required, but we want to keep you informed.

  • We have already upgraded YouTrack Cloud to a new version.
  • YouTrack Server instances on version 2025.3.132953 or later are not affected.

Action required from YouTrack Server administrators

If you are running YouTrack Server on a version older than 2025.3.132953, we recommend upgrading as soon as possible to any available version from 2025.3.132953 onward.

You can check your current version in Administration | Server Settings | Global Settings. To see which versions are available under your license, check the License Details section in the settings or visit your JetBrains Account. To upgrade, download the latest available version from the YouTrack download page, or pick a specific build from the previous versions page. For upgrade instructions, refer to the Installation and Upgrade documentation.

The vulnerability: summary

In March 2026, a security researcher from the Hacktron AI team identified a vulnerability and reported it to us through our coordinated disclosure policy. The core issue was a sandbox bypass that could allow code execution and required administrator-level permissions to exploit.

The vulnerability has been assigned the identifier CVE-2026-33392. It affected all YouTrack versions before 2025.3.132953.

The impact was most significant in YouTrack Cloud, where the vulnerability could allow the cross-tenant isolation boundary to be bypassed for tenants sharing the same hardware.

YouTrack Server is a single-tenant solution, meaning it is not possible to access anything that does not already belong to the server owner. In addition, the vulnerability requires administrator-level permissions to exploit.

Mitigation

We implemented mitigation measures within 48 hours of receiving the report.

YouTrack Cloud servers were patched, and we have found no evidence that the vulnerability was ever exploited.

For YouTrack Server, the fix is included in version 2025.3.132953 and all later versions. There are no tenant boundaries in YouTrack Server, but the vulnerability may still allow permission escalation within administrative roles.

Security bulletin

A complete list of recently fixed security issues is available on the Fixed Security Issues page on the JetBrains website. You can also subscribe to receive email notifications about security fixes across all JetBrains products.

Frequently asked questions

Which versions are affected?

All YouTrack versions before 2025.3.132953 were affected.

Is YouTrack Server affected?

Yes, but to a much lesser extent than YouTrack Cloud. YouTrack Server is a single-tenant solution, so there are no cross-tenant boundaries at risk. The vulnerability requires administrative permissions to exploit and may allow permission escalation within administrative roles. If you are already on version 2025.3.132953 or later, no action is needed.

Was my data compromised?

We have found no evidence that the vulnerability was ever exploited in any environment.

Support

If you have any questions regarding this issue, please get in touch with the YouTrack Support team.

Your YouTrack team

I Built a Real-Time Multilingual Dubbing Platform and Used TestSprite MCP to Test It

What if you could speak, and everyone listening heard you in their own language, with no noticeable delay?

That question turned into PolyDub.

What It Does

Three modes:

  • Live Broadcast: one speaker, listeners worldwide, each hearing a dubbed stream in their language
  • Multilingual Rooms: everyone speaks their own language, everyone hears everyone else in theirs
  • VOD Dubbing: upload a video, download a dubbed MP4 with SRT subtitles

The real-time pipeline:

Mic -> WebSocket -> Deepgram Nova-2 (STT) -> Google Translate (~300ms) -> Deepgram Aura-2 (TTS) -> Speaker

Perceived latency is around 1.2 to 1.5 seconds. Fast enough for a real conversation.
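
For orientation, the server side of that pipeline can be sketched in a few lines using the ws package. The three helper functions below are placeholder stubs that just echo their input; in the real project they would be the Nova-2, Google Translate, and Aura-2 calls, and the names, signatures, and hard-coded target language here are illustrative only, not PolyDub's actual code.

import { WebSocketServer } from "ws";

// Placeholder stubs standing in for the external services in the pipeline.
async function speechToText(audio: Buffer): Promise<string> {
  return "hello";
}
async function translate(text: string, targetLang: string): Promise<string> {
  return text;
}
async function textToSpeech(text: string, targetLang: string): Promise<Buffer> {
  return Buffer.from(text);
}

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  const targetLang = "es"; // in the real app each listener picks this

  // Each inbound mic chunk flows STT -> translate -> TTS, and the dubbed
  // audio goes back out on the same socket.
  socket.on("message", async (chunk) => {
    const text = await speechToText(chunk as Buffer);
    const translated = await translate(text, targetLang);
    const dubbed = await textToSpeech(translated, targetLang);
    socket.send(dubbed);
  });
});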

Landing Page

A Few Decisions Worth Explaining

Why Google Translate instead of Lingo.dev for real-time? Lingo.dev is LLM-based, which means 5 to 8 seconds of latency. Fine for batch work, not for live speech. Google’s gtx endpoint runs at 250 to 350ms warm. Lingo.dev is still in the project, compiling UI strings at build time across 15 locales.
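
For reference, a warm call to that endpoint looks roughly like the snippet below. The URL shape and response parsing are assumptions based on how the unofficial gtx endpoint is commonly used, not an official Google API contract, and not necessarily how PolyDub invokes it.

// Translate a short utterance via the unofficial gtx endpoint.
async function translateGtx(text: string, source: string, target: string): Promise<string> {
  const url =
    "https://translate.googleapis.com/translate_a/single" +
    `?client=gtx&dt=t&sl=${source}&tl=${target}&q=${encodeURIComponent(text)}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`translate failed: ${res.status}`);
  const data = await res.json();
  // The first element is an array of translated segments; join their first fields.
  return data[0].map((segment: string[]) => segment[0]).join("");
}

// translateGtx("good morning", "en", "es").then(console.log);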

Why Deepgram Aura-2? Aura v1 only shipped English voices regardless of the language param. Aura-2 ships genuinely native-accent voices: Japanese prosody, Spanish regional variation, German intonation. Using an English voice mispronouncing another language defeats the entire product.

Why a per-listener TTS queue? In a room with multiple speakers, audio chunks from different people arrive at the same socket in parallel. Without serialization they interleave into noise. A per-socket promise chain fixes this, and the queue depth is capped at 1 so stale utterances get dropped rather than building an 8-second backlog.
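
A minimal sketch of that serialization idea, with names of my own choosing rather than PolyDub's actual code: one promise chain per listener socket, ordering guaranteed by the chain, and at most one utterance allowed to wait behind the one currently playing.

type PlayJob = () => Promise<void>;

// One serialized playback queue per listener socket. The chain keeps chunks
// in order; `waiting` caps the backlog at a single pending utterance so
// stale audio is dropped instead of piling up.
class ListenerQueue {
  private chain: Promise<void> = Promise.resolve();
  private waiting = 0;

  enqueue(job: PlayJob): void {
    if (this.waiting >= 1) return; // queue depth cap: drop the stale chunk
    this.waiting += 1;
    this.chain = this.chain
      .then(() => {
        this.waiting -= 1; // this job is now playing, not waiting
        return job();
      })
      .catch((err) => console.error("TTS playback failed:", err));
  }
}

// One queue per connected socket; dubbed chunks get pushed into it.
const queues = new Map<string, ListenerQueue>();

function onDubbedChunk(socketId: string, play: PlayJob): void {
  let queue = queues.get(socketId);
  if (!queue) {
    queue = new ListenerQueue();
    queues.set(socketId, queue);
  }
  queue.enqueue(play);
}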

Screenshots

Broadcast setup

Broadcast mode: pick source and target languages, hit Start, share the listener link.

Room view

Rooms: each participant sets their own language and voice. The server handles translation per-person.

VOD studio

VOD: upload a video, pick a language, get a dubbed MP4 and SRT file back.

Testing With TestSprite MCP

The project was built under hackathon pressure. Third-party APIs can fail in specific ways. Frontend validation is easy to break quietly. Writing full test coverage by hand would have eaten most of the remaining build time.

TestSprite MCP plugs into Claude Code as an MCP server. It reads the codebase, generates a test plan, and writes runnable test code. I ran it twice: once for a baseline, and again after a round of fixes.

Backend tests generated (5/5 passing):

  • TC001 – POST /api/dub with valid file returns { srt, mp3 }
  • TC002 – POST /api/dub with missing params returns 400
  • TC003 – POST /api/dub with broken third-party API returns 500
  • TC004 – POST /api/mux with valid inputs returns video/mp4 stream
  • TC005 – POST /api/mux with missing inputs returns 400

The generated code is more thorough than what you’d write in a hurry. TC001 builds a minimal valid WAV file inline, validates the base64 response actually decodes, and checks the SRT string is non-empty:

import base64

# json_data is the parsed JSON response from POST /api/dub
mp3_bytes = base64.b64decode(json_data["mp3"], validate=True)
assert len(mp3_bytes) > 0
assert "srt" in json_data and len(json_data["srt"].strip()) > 0

Frontend tests generated (12 cases): broadcast start and validation, room create/join/leave/rejoin, language and voice change in-session, VOD upload validation, and landing-to-mode navigation flows.

What the first run caught:

  1. /api/dub was returning a plain string in some error paths instead of a consistent JSON shape. TC003 found it.
  2. The room ID field was letting through malformed IDs before hitting the server. TC009 found it.

Fixed both, reran, all clean. The dashboard keeps a full run history so you can diff before and after. That is the actual useful part: not a single passing run, but a record of what broke, what changed, and whether the fix held.

Running It

git clone https://github.com/your-username/polydub
cd polydub
pnpm install
cp .env.example .env
# set DEEPGRAM_API_KEY and LINGO_API_KEY in .env

pnpm dev     # terminal 1: Next.js on :3000
pnpm server  # terminal 2: WebSocket server on :8080

GitHub: https://github.com/crypticsaiyan/PolyDub