Air Launches as Public Preview – A New Wave of Dev Tooling Built on 26 Years of Experience

Download Air – free for macOS. Windows and Linux versions coming soon.

We hold a principled optimism for agentic software development – and a pragmatic one. After 26 years of building developer tools, we have a clear view of what needs to be built and a strong conviction that agents will fundamentally change how software gets made. But new concepts are emerging faster than anyone can validate them, so we’d rather ship what works than hype what might.

The current state of working with coding agents is fragmented: each agent runs in a separate tool, with a different setup, different context, and no structural understanding of your code. Air is an important piece in solving that puzzle, and today marks the launch of its Public Preview. It’s available to developers with a JetBrains AI subscription, as well as to those with existing subscriptions to agent providers (except Anthropic) or their own API keys.

A real agentic development environment, not a chat window

JetBrains Air is an agentic development environment for delegating coding tasks to multiple AI agents and running them concurrently. Like an IDE such as IntelliJ IDEA, Air is built on the idea of integrating the essential tools into a single coherent experience. But there’s a key difference: IDEs add tools around the code editor, while Air builds tools around the agent. The new development experience is optimized for you to guide the agent and fine-tune its output.

Air helps you navigate your codebase. You can mention a specific line, commit, class, method, or other symbol when defining a task. As a result, the agent gets precise context instead of a blob of pasted text. And when the task is done, your review doesn’t stop at the code diff – Air lets you see the changes in the context of your entire codebase, and you’ll have essential tools like a terminal, Git client, and built-in preview right in front of you. 

Let’s be honest: Complex codebases aren’t yet ready for pure agentic coding. This is where our 26 years of experience building IDEs come into play. Air focuses on agent orchestration without replacing existing development workflows. Air handles the agent-powered development; your IDE handles the rest.

Switch agents freely, run tasks concurrently

Air supports Codex, Claude Agent, Gemini CLI, and Junie out of the box. AI vendors are leapfrogging each other – Air makes switching agents across projects a natural part of the workflow, not a migration. Air supports the Agent Client Protocol (ACP) and will soon add support for other agents available via ACP through the ACP Agent Registry.

Run agents locally by default, or isolate them in Docker containers and Git worktrees for sandboxing and concurrent work.

Air helps you avoid the mess of having multiple windows and terminal tabs open for each task. You see one task (meaning one agent session) at a time. You’ll get a notification when another task needs your attention, so you can quickly switch to it while the agent keeps working. Air then helps bring your changes from a container or worktree back to your main copy.

Getting started

If you have a subscription to JetBrains AI Pro (which is included in the All Products Pack and dotUltimate) or AI Ultimate, all agents are included – just sign in with your JetBrains Account. Prefer to use your own API keys from Anthropic, OpenAI, or Google? You can bring them along! You can also use personal-use subscriptions from Google and OpenAI. If you take the BYOK (Bring Your Own Key) approach, your own keys will always be used first, and any usage not covered by them will fall back to your JetBrains subscription. A dedicated offering for enterprises is coming soon.

Cloud execution (i.e. agents running remotely in isolated sandboxes) is in tech preview and will be available to Air users soon.

Next step: Team collaboration

This release focuses on individual developer productivity. At the same time, we see this as a step toward a future where humans and agents collaborate more closely.

One insight we’ve gained from working with agents is that collaboration doesn’t start when reviewing agent output. It starts earlier, when defining the task itself. Teams benefit from refining and aligning on the task together before any agents get involved at all. We’ll share more on this soon.

Download Air, sign in, and start your first task. We listen to your feedback and constantly use it to improve – join us on X or get in touch directly.

Junie CLI, the LLM-agnostic coding agent, is now in Beta

This March, we’re taking a major step forward in the development of Junie, the coding agent by JetBrains.

Meet Junie CLI, the evolution of Junie into a fully standalone AI agent. With the upcoming release of Junie CLI, you will be able to use Junie directly from the terminal, inside any IDE, in CI/CD, and on GitHub or GitLab. Why do we call it LLM-agnostic? Junie supports all the top-performing models from OpenAI, Anthropic, Google, and xAI, and will integrate the latest models as they are released.

We want Junie CLI to support all popular developer workflows and to be barrier-free from the very beginning:

  • One-click migration from other agents such as Claude Code, Codex, and others.
  • Flexible customization through guidelines, custom agents and agent skills, commands, MCP, and other agent configuration methods.
  • BYOK (Bring Your Own Key) pricing, allowing you to use your own model keys and run the agent without additional platform charges.

Note: To help you get started, we’re offering free access to Gemini 3 Flash for one week. It’s enabled by default, so you can install Junie CLI and begin using it right away at no cost. After the week, standard pricing applies. 

Get started with Junie

Bring JetBrains quality to any environment

Junie is powered by JetBrains intelligence, combining LLM capabilities with deep project context, structured understanding, and workflow awareness.

Junie delivers strong benchmark results across top-performing models – even with fast, low-cost ones like Gemini 3 Flash – while maintaining responsiveness and accuracy.

It’s designed to be:

  • LLM-agnostic and open to all high-performing models
  • Capable of solving even complex problems
  • Context-aware by default
  • Reliable and secure, supported by all required safeguards

Real-time Prompting
Work doesn’t stop while Junie runs. You can adjust instructions and add details in real time — refining outputs without restarting the process.

Codebase Intelligence

Junie isn’t just “AI in a terminal.” It’s a fully standalone agent with capabilities designed to move beyond simple prompting.

Easy MCP Configuration

Install popular MCP servers in a few clicks, with no manual configuration required. Junie can also detect when an MCP server could help with your task and recommend the most relevant option.

Next-task Prediction
By understanding the project context, Junie anticipates what you might need next. It doesn’t just react — it proactively supports your workflow, and can even remind you of things you might otherwise forget or miss.

Making the pricing model more affordable and open

We are designing the pricing model in a completely new way. As usual, JetBrains AI licenses can be used to access Junie CLI. However, we believe that our first users deserve an even more transparent model.

We support BYOK (Bring Your Own Key), giving developers and teams the flexibility to choose their preferred model or easily test new ones. Whether you rely on specific providers for compliance, performance, cost management, or internal policies, Junie integrates seamlessly with your existing setup. This ensures teams can adopt Junie without compromising governance, security, or code quality.

Millions of tasks – and one agent to rule them all

We don’t work in a single environment anymore.

As a developer, you have to switch between different platforms all the time:  

  • IDEs
  • Terminals
  • Pull requests
  • CI/CD pipelines
  • Cloud platforms

Now Junie meets you where you are. By making Junie available outside JetBrains IDEs, we’re expanding from IDE-native AI to ecosystem-level AI – using one agent to connect platforms. This is a significant milestone for us and an important step toward enabling professional-level development, even outside the IDE.

How to Detect Energy Sentiment Anomalies with the Pulsebit API (Python)


We recently observed a striking anomaly in our dataset: a 24-hour momentum spike of +0.750 in the energy topic. This data point isn’t just a number; it’s a signal indicating that something unusual is happening in the energy sentiment landscape. With a sentiment score hovering around +0.000 and a confidence level of 0.87, this spike demands our attention and analysis.

The Problem

This spike reveals a structural gap in any pipeline that fails to account for multilingual origins or dominant entities. If your model can’t parse sentiment across diverse languages or recognize leading entities, it could miss this kind of insight by several hours. For instance, if the dominant entity in the African energy sector were to shift sentiment rapidly, your model would lag behind, unable to adapt to the swift changes in the narrative around energy.

Arabic coverage led by 4.2 hours. English at T+4.2h. Confidence scores: Arabic 0.82, Mandarin 0.68, English 0.41. Source: Pulsebit /sentiment_by_lang.

The Code

Here’s the Python code that can help you catch this anomaly. We’ll set up filters and run a meta-sentiment analysis on the narrative:

![Left: Python GET /news_semantic call for 'energy'. Right: li](https://pub-c3309ec893c24fb9ae292f229e1688a6.r2.dev/figures/g3_code_output_split_1773057630455.png)
*Left: Python GET /news_semantic call for 'energy'. Right: live JSON response structure. Three lines of Python. Clean JSON. No infrastructure required. Source: Pulsebit /news_semantic.*

import requests

# Step 1: Geographic origin filter (used when language/country data is available)
geo_filter = {'region': 'africa'}  # Filter for the African region

# Simulated data for the spike
data = {
    'topic': 'energy',
    'score': 0.000,
    'confidence': 0.87,
    'momentum': 0.750,
}

# Step 2: Meta-sentiment moment
narrative = "Energy narrative sentiment cluster analysis"
meta_sentiment_url = "https://your_api_endpoint/sentiment"

response = requests.post(meta_sentiment_url, json={'text': narrative})
meta_sentiment_result = response.json()

print(f"Meta-sentiment analysis result: {meta_sentiment_result}")

In this example, we set up a geographic filter for the African region. While we didn’t have geo filter data returned in this instance, you can specify it when your dataset includes language or country data. The meta-sentiment moment is where the magic happens: running the narrative through our sentiment endpoint allows us to score how the narrative itself is framing the situation.

Geographic detection output for energy filter. No geo data leads by article count. Bar colour: sentiment direction. Source: Pulsebit articles[].country.

Three Builds Tonight

Here are three specific builds we can leverage using this momentum spike:

  1. Geo-Specific Sentiment Tracker: Set a threshold for momentum spikes above +0.500 in the African region. Use the geo filter to track sentiment across multiple languages, allowing you to adapt your strategies based on localized sentiment shifts.

  2. Meta-Sentiment Loop: Implement a looping mechanism that regularly sends the latest narrative back through the sentiment endpoint. For example, if the momentum remains above +0.500 for 24 hours, send a daily update on the narrative sentiment and any changes.

  3. Alert System for Rapid Changes: Create an alert system that triggers when sentiment scores drop below +0.100, coupled with a significant momentum spike. This will help you react promptly to potential shifts in the energy market sentiment.
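The third build can be sketched in a few lines of Python. This is a minimal illustration, with the thresholds taken from the build descriptions above and the data shape mirroring the simulated spike earlier in the post:

```python
# Illustrative thresholds from build #3 above
SCORE_FLOOR = 0.100      # alert when the sentiment score drops below this
MOMENTUM_SPIKE = 0.500   # ...while momentum stays above this

def should_alert(reading: dict) -> bool:
    """True when a low score coincides with a significant momentum spike."""
    return reading["score"] < SCORE_FLOOR and reading["momentum"] > MOMENTUM_SPIKE

readings = [
    {"topic": "energy", "score": 0.000, "momentum": 0.750},  # the observed spike
    {"topic": "energy", "score": 0.300, "momentum": 0.750},  # score recovered
    {"topic": "energy", "score": 0.050, "momentum": 0.200},  # low score, no spike
]

alerts = [r for r in readings if should_alert(r)]
print(f"{len(alerts)} alert(s) triggered")  # → 1 alert(s) triggered
```

In production you would feed `should_alert` with live readings from the API instead of a static list, and wire the positive cases to your notification channel.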

Get Started

To dive deeper, check out our documentation. You’ll be able to copy-paste and run this code in under 10 minutes, allowing you to harness real-time sentiment anomalies effectively.

Stop Letting Agents Push to Main

You gave Claude Code access to your repo. It wrote some code. It committed. It pushed. Straight to main.

No PR. No review. No status checks. Just raw, unreviewed AI-generated code landing directly on your production branch like it owns the place.

And GitHub let it happen, because you never told it not to.

This is the single most common mistake I see from developers using agentic coding tools. Not bad prompts. Not hallucinated dependencies. Just a completely unprotected main branch and an agent that’s happy to commit wherever you point it.

The fix takes five minutes. Let’s do it.

What actually happens when you don’t protect main

Here’s the failure mode, step by step.

  1. You ask Claude Code to refactor your auth module.
  2. It makes the changes, runs git add ., writes a commit message, and pushes to main.
  3. If you’ve got CI/CD hooked up to main (and you probably do), that code is now deploying.
  4. The refactor has a bug. Of course it does. It’s unreviewed code.
  5. Your users find out before you do.

This isn’t a hypothetical. This is what happens when the shortest path between “write code” and “deploy to production” has zero gates on it.

The five-minute fix: branch protection rules

Go to your repo on GitHub. Click Settings > Branches. Under “Branch protection rules,” click Add branch ruleset.

Give it a name in “Ruleset Name”.

Click “Add target” -> “Include by pattern”

In the “Branch name pattern” field, type main.

Now check these boxes:

Require a pull request before merging. This is the big one. It means nobody, not you, not your agent, not anyone, can push directly to main. All changes go through a PR. Period.

Require approvals. Set this to at least 1. Even if you’re a solo dev, this forces you to actually look at the diff before it merges. We’ll talk more about reviewing AI-generated code later in this series.

Require status checks to pass before merging. You might not have CI set up yet. That’s fine. Once you do, add them here. They’ll automatically become gates on every PR. Future you will thank present you.

Block force pushes. Force pushing to main should never happen. Not by you, not by your agent, not by anyone. This is non-negotiable.

That’s all of your settings.

Change “Enforcement status” to Active.

Do not allow bypassing the above settings. This one matters more than people think. Without it, repo admins can skip all the rules. That includes you. The whole point of guardrails is that they work even when you’re in a hurry and “just want to push this one thing real quick.”

Click Create. You’re done.
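If you manage many repos, the same rules can also be applied programmatically. Here is a hedged sketch using GitHub’s classic branch-protection REST endpoint (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`); the owner, repo, and token values are placeholders you’d supply, and you should double-check the payload against GitHub’s current API docs:

```python
import json
import urllib.request

def protection_payload() -> dict:
    """Mirror the checkboxes from the walkthrough as an API payload."""
    return {
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "required_status_checks": None,   # add check contexts here once CI exists
        "enforce_admins": True,           # no bypassing, even for admins
        "restrictions": None,             # no push restrictions beyond the above
        "allow_force_pushes": False,      # block force pushes
    }

def protect_main(owner: str, repo: str, token: str) -> None:
    """Apply the protection rules to a repo's main branch."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/branches/main/protection",
        data=json.dumps(protection_payload()).encode(),
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    urllib.request.urlopen(req)
```

Looping `protect_main` over a list of repos gets every project behind the same guardrail in one go.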

What this changes about your workflow

Your agent can still write code. It can still commit. It just can’t land those commits on main without going through a pull request.

In practice, this means you (or your agent) work on a feature branch:

git checkout -b feat/refactor-auth
# ... do the work ...
git add .
git commit -m "refactor auth module"
git push origin feat/refactor-auth

Then you open a PR, review the diff, and merge. That’s it. One extra step that puts a human in the loop before code hits production.

If you’re using Claude Code, you can tell it to work on a branch. It’s good at following that instruction. And if it tries to push to main, GitHub will reject the push. The guardrail works even when the human forgets.

“But I’m a solo dev, I don’t need PRs”

Yes you do. Especially now.

When it was just you writing code, pushing to main was a calculated risk. You wrote it, you understood it, you shipped it. Reckless, maybe, but at least you knew what you were deploying.

That equation changes completely when an agent is generating code on your behalf. You didn’t write it line by line. You prompted it. There’s a real gap between “I asked for a thing” and “I understand every line of what was produced.” The PR is where you close that gap.

Even if the review is just you reading the diff for two minutes, that’s two minutes of catching the bug that would have cost you two hours in production.

What this doesn’t cover

Branch protection is one layer. Important, but not sufficient on its own. You still need:

  • Secret scanning so your agent doesn’t accidentally commit your .env file (next article in the series).
  • CI checks so the code that lands on main actually passes linting and tests (coming soon).
  • A review process that’s calibrated for AI-generated code (also coming).

This is the foundation. Everything else stacks on top of it.

Go do it now

Seriously. Open your repos, the ones you’re using with Claude Code or Copilot or Cursor or any agentic tool. Check if main is protected. If it’s not, fix it. Five minutes.

The best guardrail is the one that was already in place before something went wrong.

This is Part 1 of the Guardrails series on safe development with AI coding agents. Next up: Secrets, Agents, and .env Files.

Why I Moved My Uptime Monitoring Into the Terminal

Every monitoring tool I’ve used makes me leave my terminal to click through a web dashboard.

I spend my day in VS Code and deploy with git push, but to add a health check I need to open a browser, log in, click “New Monitor”, fill out a form, and click save. Then do it again for the next endpoint. Configuration lives in someone else’s database, not my repo.

This felt wrong.

The Problem: Monitoring UX is Stuck in 2010

Most monitoring tools are dashboard-first. The workflow looks like:

  1. Open browser
  2. Log in to monitoring service
  3. Click “New Monitor”
  4. Fill out form: name, URL, interval, expected status, alert channel
  5. Save
  6. Repeat for each endpoint

The result:

  • Configuration drift — Dashboard state diverges from what’s in version control. Nobody reviews monitor changes.
  • No audit trail — When did someone change the interval from 30s to 5m? Who removed the Slack alert? Good luck finding out.
  • Context switching — You’re in a terminal deploying a new service, and now you have to context-switch to a browser to add monitoring.
  • No code review — Teammates can’t review monitor changes in a PR. There’s no git diff for dashboard clicks.

Tools like UptimeRobot and BetterStack are excellent at what they do. But if your deployment workflow is git push → CI → production, having monitoring configured through a web form is the odd one out.

What I Actually Wanted

I wanted monitoring that fits the way I already work:

# monitors.yaml — checked into the repo
version: 1
monitors:
  - name: production-api
    url: https://api.example.com/health
    interval: 60
    expect:
      status: 200
      contains: "ok"
    alerts:
      slack: "#oncall"

  - name: website
    url: https://example.com
    interval: 60
    expect:
      status: 200

Deploy it the same way I deploy everything else:

# .github/workflows/deploy-monitors.yml
name: Deploy Monitors
on:
  push:
    paths: [monitors.yaml]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: curl -fsSL termwatch.dev/install.sh | sh
      - run: termwatch validate
      - run: termwatch deploy
    env:
      TERMWATCH_API_KEY: ${{ secrets.TERMWATCH_API_KEY }}

Check status without opening a browser:

$ termwatch status
NAME            STATUS    RESP     CHECKED
api-health      ✓ UP      124ms    32s ago
web-app         ✓ UP       89ms    32s ago
payment-svc     ✗ DOWN     -       1m ago
postgres        ✓ UP       12ms    32s ago

Monitor changes show up in pull request diffs. The configuration is version-controlled. The deployment is automated. Everything is a text file or a terminal command.
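For a sense of what validation of that file shape could look like, here is an illustrative sketch — not TermWatch’s actual validator — with the parsed YAML shown as the equivalent Python dict to keep the example dependency-free:

```python
REQUIRED = ("name", "url", "interval")

def validate(config: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    errors = []
    if config.get("version") != 1:
        errors.append("unsupported config version")
    for i, mon in enumerate(config.get("monitors", [])):
        for field in REQUIRED:
            if field not in mon:
                errors.append(f"monitor #{i}: missing '{field}'")
        if not str(mon.get("url", "")).startswith("https://"):
            errors.append(f"monitor #{i}: url must be https")
    return errors

# The dict equivalent of the monitors.yaml shown above (minus optional keys)
config = {
    "version": 1,
    "monitors": [
        {"name": "production-api", "url": "https://api.example.com/health", "interval": 60},
        {"name": "website", "url": "https://example.com", "interval": 60},
    ],
}
print(validate(config))  # → []
```

Because validation is just a function over a plain data structure, it can run in CI before deploy — a bad config fails the pipeline instead of silently breaking monitoring.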

The YAML-First Approach

The core idea is simple: monitors are code. They live in your repo, they’re reviewed in PRs, and they’re deployed through CI.

This gives you some things that dashboard-configured monitoring can’t:

1. Review before deploy. Someone changes interval: 30 to interval: 300? That shows up in a diff. A teammate will ask why.

2. Rollback is git revert. Accidentally deleted a monitor? git revert brings it back. Your monitoring config has the same guarantees as your application code.

3. Environment parity. If you have monitors-staging.yaml and monitors-production.yaml, you can keep them in sync with the same tooling you use for application config.

4. Onboarding. New team member looks at monitors.yaml and immediately understands what’s being monitored, at what interval, and what alerts exist. No dashboard walkthrough needed.

Honest Tradeoffs

This approach is not for everyone, and there are real downsides:

  • No visual dashboards. If your team prefers charts and graphs over terminal tables, tools like BetterStack or Grafana are a better fit. A web dashboard exists for viewing results, but configuration is CLI-only.

  • Learning a schema. You need to learn the YAML format. It’s simple, but it is one more thing to learn.

  • Single check region. Tools like UptimeRobot check from multiple geographic locations to reduce false positives. Single-region checks are more prone to network-level false alarms.

  • Fewer integrations. Established tools have 50+ notification channels. Starting with Slack, Discord, and email covers most cases, but not all.

  • Newer and less proven. A tool with tens of thousands of users has battle-testing that a new tool simply hasn’t had yet.

For developers who already live in the terminal and manage infrastructure with code, the tradeoffs are worth it. For teams that prefer GUIs or need enterprise features, UptimeRobot and BetterStack are great products.

What I Built

I built TermWatch to scratch this itch. It’s a CLI that reads monitors.yaml, deploys monitor configuration to a hosted checking service, and sends alerts when things go down.

It’s available via dotnet tool install -g termwatch (NuGet) or as standalone binaries with SHA256 checksum verification.

The free tier gives you 5 monitors with 5-minute check intervals — enough to monitor a real side project stack.

I Want to Hear From You

If you’ve solved the “monitoring from the terminal” problem differently, I’d genuinely like to hear how. Do you use Prometheus + Alertmanager? Checkly? A custom script? What works and what doesn’t?

Drop a comment or find me on GitHub — I’m still figuring out the right balance between simplicity and features, and real-world feedback is the most valuable thing right now.

5 CI/CD Pipeline Disasters I Caused (And How I Fixed Them)

A confession from a Senior DevOps Engineer who has broken production more times than he’d like to admit

Nobody writes post-mortems about CI/CD pipelines.

When a database goes down, there’s a war room, a Slack channel, and a 12-page incident report. When a CI/CD pipeline silently deploys a broken build to production at 3 AM on a Saturday — nobody writes about that.

I’m going to write about that. Because in five years of building deployment pipelines for enterprise teams, I’ve caused more production incidents through pipeline misconfigurations than through any other single category of mistake.

These are my five worst pipeline disasters, exactly how they happened, and the engineering changes we made so they could never happen again.

Disaster #1: The Pipeline That Deployed on Every Commit to Main

The setup: We had just migrated from Jenkins to Azure DevOps. The new pipeline was beautiful — multi-stage, with build, test, and deploy stages. I was proud of it.

The trigger configuration:

trigger:
  branches:
    include:
      - main

Simple. Clean. Every commit to main triggers the pipeline. Ship fast, right?

What happened: A developer merged a PR at 4:47 PM on a Friday. The PR had passed code review. It had passed unit tests. It looked fine.

But the developer had also updated a README.md file in the same PR. The pipeline triggered. It built, tested, and deployed — all automatically. No approval gate. No human checkpoint. Straight to production.

The code change was fine. But the deployment happened during a database migration that was running in production. The new code expected a column that didn’t exist yet. Instant 500 errors across the entire API.

The fix:

# Never auto-deploy to production. Never.
stages:
  - stage: Build
    # Auto-trigger on main ✅

  - stage: DeployStaging
    dependsOn: Build
    # Auto-deploy to staging ✅

  - stage: DeployProduction
    dependsOn: DeployStaging
    condition: succeeded()
    jobs:
      - deployment: Production
        environment: 'production'  # ← Manual approval gate
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploying to production"

The rule established: Build and deploy to staging automatically. Production deployments always require manual approval. Always. No exceptions. Not for hotfixes. Not for “just a config change.” Not for anything.

Environment approval configuration:

  1. Azure DevOps → Environments → Production → Approvals and checks
  2. Add approval gate with 2 required approvers
  3. Add business hours check (no deploys Friday 4 PM to Monday 8 AM)
  4. Add branch control (only main branch)
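The business-hours check (step 3) boils down to a small window function. This is a sketch of the logic only, not the actual Azure DevOps check implementation:

```python
from datetime import datetime

def deploy_allowed(now: datetime) -> bool:
    """False inside the Friday 16:00 -> Monday 08:00 freeze window."""
    wd, hour = now.weekday(), now.hour  # Monday == 0 ... Sunday == 6
    if wd == 4 and hour >= 16:   # Friday from 4 PM onward
        return False
    if wd in (5, 6):             # all of Saturday and Sunday
        return False
    if wd == 0 and hour < 8:     # Monday before 8 AM
        return False
    return True

print(deploy_allowed(datetime(2024, 3, 1, 16, 47)))  # Friday 4:47 PM → False
```

That 4:47 PM Friday merge from the incident above would have been held until Monday morning.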

Time to detect: 23 minutes
Time to resolve: 4 minutes (rollback)
Users affected: ~2,000

Disaster #2: The Secret That Wasn’t Secret

The setup: We needed to pass a database connection string to our deployment pipeline. I did what any reasonable engineer would do: I stored it as a pipeline variable.

What I did wrong: I forgot to check the “Keep this value secret” checkbox.

Which means the connection string — containing the database hostname, port, username, and password — was visible in plain text in the pipeline logs. For every single run. For three months.

# What appeared in every pipeline log for 90 days:
Step 4/12: Setting environment variables
  DATABASE_URL=postgresql://admin:P@ssw0rd2024!@prod-db.postgres.database.azure.com:5432/maindb
  REDIS_URL=redis://:RedisP@ss!@prod-redis.redis.cache.windows.net:6380

Anyone with read access to the pipeline — which was the entire engineering team of 40 people — could see production database credentials in every build log.

How we found it: A new hire asked, “Hey, should this password be visible in the logs?” Bless that new hire.

The fix (multi-layered):

# Layer 1: Azure DevOps secret variables (masked in logs)
variables:
  - group: production-secrets  # Linked to Azure Key Vault

# Layer 2: Pipeline step to verify no secrets in output
- script: |
    # Scan pipeline logs for potential secrets
    if grep -rE "(password|secret|key|token|connectionstring)=" $(Pipeline.Workspace)/logs/; then
      echo "##vso[task.logissue type=error]Potential secret found in logs!"
      exit 1
    fi
  displayName: 'Secret Scan'

# Layer 3: Azure Key Vault integration (source of truth)
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'production-connection'
    KeyVaultName: 'kv-prod-app'
    SecretsFilter: 'database-url,redis-url,api-key'

The rule established:

  1. No secrets in pipeline variables. All secrets come from Azure Key Vault.
  2. Secret scanning in every pipeline run using GitLeaks or similar.
  3. Quarterly audit of all pipeline variables across all projects.
  4. Rotate all credentials immediately if exposure is discovered.

We rotated every credential that had ever appeared in those logs. Every. Single. One.
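The same scan the pipeline runs with grep can also be run locally before pushing. Here is a rough Python equivalent, with illustrative (not exhaustive) patterns and a made-up log sample:

```python
import re

# Same keywords as the grep step in the pipeline; illustrative, not exhaustive
SECRET_PATTERN = re.compile(
    r"(password|secret|key|token|connectionstring)\s*=\s*\S+", re.IGNORECASE
)

def find_leaks(log_text: str) -> list[str]:
    """Return the lines that look like they contain a credential."""
    return [line for line in log_text.splitlines() if SECRET_PATTERN.search(line)]

log = """Step 4/12: Setting environment variables
  DB_PASSWORD=hunter2
  BUILD_NUMBER=1042"""

leaks = find_leaks(log)
print(f"{len(leaks)} suspicious line(s)")  # → 1 suspicious line(s)
```

Keyword scanning like this produces false positives (e.g. `ssh_key_path=...`), so treat hits as review prompts, not automatic proof of a leak.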

Time to detect: 90 days (that’s the scary part)
Credentials rotated: 14
Sleep lost: Significant

Disaster #3: The Cache That Poisoned Every Build

The setup: Our Node.js builds were taking 12 minutes because npm install downloaded 1,200 packages every time. So I added caching:

- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    path: $(npm_config_cache)
    restoreKeys: |
      npm | "$(Agent.OS)"
  displayName: 'Cache npm packages'

Build time dropped from 12 minutes to 3 minutes. I was a hero.

For exactly two weeks.

What happened: A developer updated a package to fix a critical security vulnerability. They ran npm update, updated package.json, but the package-lock.json didn’t properly reflect the transitive dependency change.

The cache key was based on package-lock.json. The lock file barely changed. So the cache hit. And the old, vulnerable version of the package was restored from cache instead of the new, patched version.

The “fixed” version deployed to production with the unfixed vulnerability. For nine days.

How we found it: A security scan in a different pipeline caught the vulnerable dependency. But only because that pipeline didn’t have caching enabled.

The fix:

# Better cache strategy with integrity verification
- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json | $(Build.SourceVersion)'
    path: $(npm_config_cache)
    restoreKeys: |
      npm | "$(Agent.OS)" | package-lock.json
  displayName: 'Cache npm packages'

# ALWAYS verify after cache restore
- script: |
    npm ci  # Clean install — ignores cache if lock file changed
    npm audit --audit-level=high  # Fail if high/critical vulns
  displayName: 'Install & Audit'

The key change: using npm ci instead of npm install. npm ci deletes node_modules and installs fresh from the lock file — it uses the download cache but doesn’t trust stale installed packages.

The rule established:

  1. Cache downloads, not installations. Cache the npm/pip download cache, not the node_modules or .venv directory itself.
  2. Always run security audit after dependency installation, regardless of cache.
  3. Include the commit SHA in the cache key for critical pipelines.
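The cache-key rules above are easy to reason about outside Azure Pipelines too. This sketch shows why hashing the lock file (plus an optional commit SHA) busts the cache on any change; the key format is illustrative:

```python
import hashlib

def cache_key(lock_file_bytes: bytes, agent_os: str, commit_sha: str = "") -> str:
    """Build a cache key from lock-file content, OS, and optional commit SHA."""
    lock_hash = hashlib.sha256(lock_file_bytes).hexdigest()[:16]
    parts = ["npm", agent_os, lock_hash]
    if commit_sha:               # per-commit key for critical pipelines
        parts.append(commit_sha[:8])
    return " | ".join(parts)

old = cache_key(b'{"lockfileVersion": 2}', "Linux")
new = cache_key(b'{"lockfileVersion": 3}', "Linux")
assert old != new  # any change to the lock file misses the stale cache
```

Hashing the file's content (rather than trusting its name or a partial diff) is what guarantees that even a one-byte lock-file change produces a fresh key.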

Disaster #4: The Infinite Loop Deploy

The setup: We had a GitOps pipeline where pushing a new image tag to the config repository triggered a deployment. ArgoCD would detect the change and sync.

We also had a pipeline that ran after deployment to update a status badge in the same config repository.

You see where this is going.

1. Developer pushes code
2. CI pipeline builds image, pushes to registry
3. CI pipeline updates image tag in config repo  ← triggers ArgoCD
4. ArgoCD deploys new version
5. Post-deploy pipeline updates status badge in config repo ← triggers step 3
6. Step 3 triggers ArgoCD again...
7. ArgoCD deploys the same version...
8. Post-deploy pipeline updates status badge...
9. GOTO 5

Infinite deployment loop.

It ran 47 times in 3 hours before someone noticed. Forty-seven deployments of the same version. Each one triggering pod restarts, health check failures, and brief service disruptions.

The monitoring dashboard looked like a heart rate monitor during a panic attack.

The fix:

# In the badge-update pipeline:
- script: |
    # Check if the image tag actually changed
    CURRENT_TAG=$(grep 'image:' k8s/deployment.yaml | awk -F: '{print $NF}')
    if [ "$CURRENT_TAG" == "$NEW_TAG" ]; then
      echo "Image tag unchanged. Skipping commit."
      exit 0
    fi

    # Use [skip ci] in commit message
    git commit -m "chore: update status badge [skip ci]"
    git push
  displayName: 'Update badge (skip CI trigger)'

The rules established:

  1. Any commit to the config repo that isn’t a deployment must include [skip ci] in the commit message.
  2. Idempotency checks before every git push in pipelines — don’t push if nothing meaningful changed.
  3. Circuit breaker: If the same pipeline runs more than 3 times in 10 minutes, automatically block further runs and alert the team.
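The circuit breaker in rule 3 is a sliding-window counter at heart. A minimal in-memory sketch (a real implementation would persist run timestamps somewhere outside any single pipeline run):

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 600  # 10-minute window
MAX_RUNS = 3          # more than this trips the breaker

class CircuitBreaker:
    def __init__(self):
        self.runs = defaultdict(deque)  # pipeline name -> recent run timestamps

    def allow(self, pipeline: str, now: float) -> bool:
        q = self.runs[pipeline]
        while q and now - q[0] > WINDOW_SECONDS:  # drop runs outside the window
            q.popleft()
        if len(q) >= MAX_RUNS:
            return False                           # tripped: block and alert
        q.append(now)
        return True

cb = CircuitBreaker()
results = [cb.allow("deploy", t) for t in (0, 60, 120, 180)]
print(results)  # → [True, True, True, False]
```

With this in place, the 47-deploy loop above would have been cut off on run four.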

Disaster #5: The “Works on My Machine” Dockerfile

The setup: A developer built a Docker image locally on their M1 MacBook. The image worked perfectly in local testing. They pushed it to our Azure Container Registry.

The CI pipeline didn’t build the image — it pulled the developer’s pre-built image and deployed it.

What happened: The developer’s MacBook builds arm64 images. Our AKS nodes run amd64.

Error: exec format error

Every single pod crashed. CrashLoopBackOff across 12 services. On a Monday morning.

The irony: the error message “exec format error” is so vague that we spent 20 minutes thinking it was a binary corruption issue before someone realized the architecture mismatch.
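A check that would have caught this in minutes: compare the image's architecture to what the nodes run before deploying. In a pipeline you'd read it from `docker image inspect --format '{{.Architecture}}'`; the sketch below parses an embedded sample payload instead, since the JSON here is purely illustrative:

```shell
#!/bin/sh
# Compare an image's architecture to what the cluster nodes run.
# In a real pipeline, INSPECT would come from:
#   docker image inspect --format '{{json .}}' "$IMAGE"
# The embedded payload below is a stand-in for illustration.
EXPECTED_ARCH="amd64"
INSPECT='{"Os":"linux","Architecture":"arm64"}'

# Extract the Architecture field (jq would be cleaner if available)
actual=$(printf '%s' "$INSPECT" | sed -n 's/.*"Architecture":"\([^"]*\)".*/\1/p')

if [ "$actual" != "$EXPECTED_ARCH" ]; then
  echo "ARCH MISMATCH: image is $actual, nodes are $EXPECTED_ARCH - refusing to deploy"
else
  echo "arch ok: $actual"
fi
```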

The fix:

# ALWAYS build in the pipeline. Never use pre-built images.
- script: |
    docker buildx build \
      --platform linux/amd64 \
      --tag $(ACR_NAME).azurecr.io/$(IMAGE_NAME):$(Build.BuildId) \
      --push \
      .
  displayName: 'Build & Push (amd64 only)'

And policy enforcement using OPA/Gatekeeper:

# Reject any image not built by our CI pipeline
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: require-acr-images
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["production"]
  parameters:
    repos:
      - "prodacr.azurecr.io/"  # Only allow images from our ACR
# Image provenance - verify build pipeline metadata
# We tag every CI-built image with build metadata
- script: |
    docker buildx build \
      --label "built-by=azure-devops" \
      --label "pipeline=$(Build.DefinitionName)" \
      --label "build-id=$(Build.BuildId)" \
      --label "commit=$(Build.SourceVersion)" \
      --platform linux/amd64 \
      --tag $(ACR_NAME).azurecr.io/$(IMAGE_NAME):$(Build.BuildId) \
      --push \
      .

The rules established:

  1. Never deploy locally-built images. All production images must be built by the CI pipeline.
  2. Always specify --platform linux/amd64 in Docker builds.
  3. Label every image with build metadata for traceability.
  4. OPA policy rejecting images not from approved registries.
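The registry allow-list is also cheap to mirror as a pre-deploy check in the pipeline itself, so a bad image fails fast instead of being bounced at admission time. A minimal sketch — the image reference is a made-up example:

```shell
#!/bin/sh
# Mirror the OPA allow-list in the pipeline: only deploy images from our ACR.
ALLOWED_PREFIX="prodacr.azurecr.io/"
IMAGE="prodacr.azurecr.io/payments-api:4711"   # hypothetical image reference

case "$IMAGE" in
  "$ALLOWED_PREFIX"*)
    echo "allowed: $IMAGE"
    allowed=yes
    ;;
  *)
    echo "REJECTED: $IMAGE is not from $ALLOWED_PREFIX"
    allowed=no
    ;;
esac
```

Belt and braces: the pipeline check gives developers a fast, readable failure, while the Gatekeeper policy remains the actual enforcement boundary.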

What I Learned

Five disasters. Five 2-AM wake-up calls. Five post-mortems that started with “how did we let this happen?”

Here’s what I took away:

1. Pipelines are infrastructure. They deserve the same rigor as your Kubernetes manifests, Terraform modules, and application code. Code review them. Test them. Version them.

2. Every pipeline needs a circuit breaker. A mechanism that says “something is wrong, stop deploying” before a human notices. Whether it’s deployment frequency limits, health check gates, or automatic rollback triggers.

3. Secrets in pipelines are the #1 security risk most teams ignore. Not because they don’t care, but because it’s invisible. You don’t see the password in the logs until someone goes looking.

4. Caches are assumptions. And assumptions expire. Always verify after a cache restore. Trust, but verify. Actually — don’t trust. Just verify.

5. The scariest incidents aren’t the ones that cause an outage. They’re the ones you don’t notice for 90 days. A secret exposed in logs for three months. A vulnerable dependency cached for nine days. A deployment loop running 47 times. Silent failures are worse than loud ones.

If you’re building CI/CD pipelines, learn from my mistakes. If you’ve already made these mistakes, welcome to the club.

We don’t have t-shirts, but we have excellent post-mortems.

If you’ve survived your own pipeline disaster, I want to hear about it. Drop a comment — anonymized, of course. Your secrets are safe with me (unlike my pipeline variables).

Follow for more real-world DevOps war stories. I write about the things we break so you don’t have to.

The Quiet Exodus

The summer of 2025 brought an unlikely alliance to Washington. Senators from opposite sides of the aisle stood together to introduce legislation forcing American companies to disclose when they’re replacing human customer service agents with artificial intelligence or shipping those jobs overseas. The Keep Call Centers in America Act represents more than political theatre. It signals a fundamental shift in how governments perceive the relationship between automation, labour markets, and national economic security.

For Canada, the implications are sobering. The same AI technologies promising productivity gains are simultaneously enabling economic reshoring that threatens to pull high-value service work back to the United States whilst leaving Canadian workers scrambling for positions that may no longer exist. This isn’t a distant possibility. It’s happening now, measurable in job postings, employment data, and the lived experiences of early-career workers already facing what Stanford researchers call a “significant and disproportionate impact” from generative AI.

The question facing Canadian policymakers is no longer whether AI will reshape service economies, but how quickly, how severely, and what Canada can do to prevent becoming collateral damage in America’s automation-driven industrial strategy.

Manufacturing’s Dress Rehearsal

To understand where service jobs are heading, look first at manufacturing. The Reshoring Initiative’s 2024 annual report documented 244,000 U.S. manufacturing jobs announced through reshoring and foreign direct investment, continuing a trend that has brought over 2 million jobs back to American soil since 2010. Notably, 88% of these 2024 positions were in high or medium-high tech sectors, rising to 90% in early 2025.

The drivers are familiar: geopolitical tensions, supply chain disruptions, proximity to customers. But there’s a new element. According to research cited by Deloitte, AI and machine learning are projected to contribute to a 37% increase in labour productivity by 2025. When Boston Consulting Group estimated that reshoring would add 10-30% in costs versus offshoring, they found that automating tasks with digital workers could offset these expenses by lowering overall labour costs.

Here’s the pattern: AI doesn’t just enable reshoring by replacing expensive domestic labour. It makes reshoring economically viable by replacing cheap foreign labour too. The same technology threatening Canadian service workers is simultaneously making it affordable for American companies to bring work home from India, the Philippines, and Canada.

The specifics are instructive. A mid-sized electronics manufacturer that reshored from Vietnam to Ohio in 2024 cut production costs by 15% within a year. Semiconductor investments created over 17,600 new jobs through mega-deals involving TSMC, Samsung, and ASML. Nvidia opened AI supercomputer facilities in Arizona and Texas in 2025, tapping local engineering talent to accelerate next-generation chip design.

Yet these successes mask deeper contradictions. More than 600,000 U.S. manufacturing jobs remain unfilled as of early 2025, even as retirements accelerate. According to the Manufacturing Institute, five out of ten open positions for skilled workers remain unoccupied due to the skills gap crisis. The solution isn’t hiring more workers. It’s deploying AI to do more with fewer people, a dynamic that manufacturing pioneered and service sectors are now replicating at scale.

Texas, South Carolina, and Mississippi emerged as top 2025 states for reshoring and foreign direct investment. Access to reliable energy and workforce availability now drives site selection, elevating regions like Phoenix, Dallas-Fort Worth, and Salt Lake City. Meanwhile, tariffs have become a key motivator, cited in 454% more reshoring cases in 2025 versus 2024, whilst government incentives were cited 49% less as previous subsidies phase out.

The manufacturing reshoring story reveals proximity matters, but automation matters more. When companies can manufacture closer to American customers using fewer workers than foreign operations required, the economic logic of Canadian manufacturing operations deteriorates rapidly.

The Contact Centre Transformation

The contact centre industry offers the clearest view of this shift. In August 2022, Gartner predicted that conversational AI would reduce contact centre agent labour costs by $80 billion by 2026. Today, that looks conservative. The average cost per live service interaction ranges from $8 to $15. AI-powered resolutions cost $1 or less per interaction, a 5x to 15x cost reduction at scale.

The voice AI market has exploded faster than anticipated, projected to grow from $3.14 billion in 2024 to $47.5 billion by 2034. Companies report containing up to 70% of calls without human interaction, saving an estimated $5.50 per contained call.

Modern voice AI agents merge speech recognition, natural language processing, and machine learning to automate complex interactions. They interpret intent and context, handle complex multi-turn conversations, and continuously improve responses by analysing past interactions.

By 2027, Gartner predicts that 70% of customer interactions will involve voice AI. The technology handles fully automated call operations with natural-sounding conversations. Some platforms operate across more than 30 languages and scale across thousands of simultaneous conversations. Advanced systems provide real-time sentiment analysis and adjust responses to emotional tone. Intent recognition allows these agents to understand a speaker’s goal even when poorly articulated.

AI assistants that summarise and transcribe calls save at least 20% of agents’ time. Intelligent routing systems match customers with the best-suited available agent. Rather than waiting on hold, customers receive instant answers from AI agents that resolve 80% of inquiries independently.

For Canada’s contact centre workforce, these numbers translate to existential threat. The Bureau of Labor Statistics projects a loss of 150,000 U.S. call centre jobs by 2033. Canadian operations face even steeper pressure. When American companies can deploy AI to handle customer interactions at a fraction of the cost of nearshore Canadian labour, the economic logic of maintaining operations across the border evaporates.

The Keep Call Centers in America Act attempts to slow this shift through requirements that companies disclose call centre locations and AI usage, with mandates to transfer to U.S.-based human agents on customer request. Companies relocating centres overseas face notification requirements 120 days in advance, public listing for up to five years, and ineligibility for federal contracts. Civil penalties can reach $10,000 per day for noncompliance.

Whether this legislation passes is almost beside the point. The fact that it exists, with bipartisan support, reveals how seriously American policymakers take the combination of offshoring and AI as threats to domestic employment. Canada has no equivalent framework, no similar protections, and no comparable political momentum to create them.

The emerging model isn’t complete automation but human-AI collaboration. AI handles routine tasks and initial triage whilst human agents focus on complex cases requiring empathy, judgement, or escalated authority. This sounds promising until you examine the mathematics. If AI handles 80% of interactions, organisations need perhaps 20% of their previous workforce. Even assuming some growth in total interaction volume, the net employment impact remains sharply negative.

The Entry-Level Employment Collapse

Whilst contact centres represent the most visible transformation, the deeper structural damage is occurring amongst early-career workers across multiple sectors. Research from Stanford economists Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, drawing on ADP’s 25 million worker database, found that early-career employees in fields most exposed to AI have experienced a 13% drop in employment since 2022 compared to more experienced workers in the same fields.

Employment for 22- to 25-year-olds in jobs with high AI exposure fell 6% between late 2022 and July 2025, whilst employment amongst workers 30 and older grew between 6% and 13%. The pattern holds across software engineering, marketing, customer service, and knowledge work occupations where generative AI overlaps heavily with skills gained through formal education.

Brynjolfsson explained to CBS MoneyWatch: “That’s the kind of book learning that a lot of people get at universities before they enter the job market, so there is a lot of overlap between these LLMs and the knowledge young people have.” Older professionals remain insulated by tacit knowledge and soft skills acquired through experience.

Venture capital firm SignalFire quantified this in their 2025 State of Talent Report, analysing data from 80 million companies and 600 million LinkedIn employees. They found a 50% decline in new role starts by people with less than one year of post-graduate work experience between 2019 and 2024. The decline was consistent across sales, marketing, engineering, recruiting, operations, design, finance, and legal functions.

At Big Tech companies, new graduates now account for just 7% of hires, down 25% from 2023 and over 50% from pre-pandemic 2019 levels. The share of new graduates landing roles at the Magnificent Seven (Alphabet, Amazon, Apple, Meta, Microsoft, NVIDIA, and Tesla) has dropped by more than half since 2022. Meanwhile, these companies increased hiring by 27% for professionals with two to five years of experience.

The sector-specific data reveals where displacement cuts deepest. In technology, 92% of IT jobs face transformation from AI, hitting mid-level (40%) and entry-level (37%) positions hardest. Unemployment amongst 20- to 30-year-olds in tech-exposed occupations has risen by 3 percentage points since early 2025. Customer service projects 80% automation by 2025, displacing 2.24 million out of 2.8 million U.S. jobs. Retail faces 65% automation risk, concentrated amongst cashiers and floor staff. Data entry and administrative roles could see AI eliminate 7.5 million positions by 2027, with manual data entry clerks facing 95% automation risk.

Financial services research from Bloomberg reveals that AI could replace 53% of market research analyst tasks and 67% of sales representative tasks, whilst managerial roles face only 9% to 21% automation risk. The pattern repeats across sectors: entry-level analytical, research, and customer-facing work faces the highest displacement risk, whilst senior positions requiring judgement, relationship management, and strategic thinking remain more insulated.

For Canada, the implications are acute. Canadian universities produce substantial numbers of graduates in precisely the fields seeing the steepest early-career employment declines. These graduates traditionally competed for positions at U.S. tech companies or joined Canadian offices of multinationals. As those entry points close, they either compete for increasingly scarce Canadian opportunities or leave the field entirely, representing a massive waste of educational investment.

Research firm Revelio Labs documented that postings for entry-level jobs in the U.S. overall have declined about 35% since January 2023, with AI playing a significant role. Entry-level job postings, particularly in corporate roles, have dropped 15% year over year, whilst the number of employers referencing “AI” in job descriptions has surged by 400% over the past two years. This isn’t simply companies being selective. It’s a fundamental restructuring of career pathways, with AI eliminating the bottom rungs of the ladder workers traditionally used to gain experience and progress to senior roles.

The response amongst some young workers suggests recognition of this reality. In 2025, 40% of young university graduates are choosing careers in plumbing, construction, and electrical work, trades that cannot be automated, representing a dramatic shift from pre-pandemic career preferences.

The Canadian Response

Against this backdrop, Canadian policy responses appear inadequate. Budget 2024 allocated $2.4 billion to support AI in Canada, a figure that sounds impressive until you examine the details. Of that total, just $50 million over four years went to skills training for workers in sectors disrupted by AI through the Sectoral Workforce Solutions Program. That’s 2% of the envelope, divided across millions of workers facing potential displacement.

The federal government’s Canadian Sovereign AI Compute Strategy, announced in December 2024, directs up to $2 billion toward building domestic AI infrastructure. These investments address Canada’s competitive position in developing AI technology. As of November 2023, Canada’s AI compute capacity represented just 0.7% of global capacity, half that of the United Kingdom, the next lowest G7 nation.

But developing AI and managing AI’s labour market impacts are different challenges. The $50 million for workforce retraining is spread thin across affected sectors and communities. There’s no coordinated strategy for measuring AI’s employment effects, no systematic tracking of which occupations face the highest displacement risk, and no enforcement mechanisms ensuring companies benefiting from AI subsidies maintain employment levels.

Valerio De Stefano, Canada research chair in innovation law and society at York University, argued that “jobs may be reduced to an extent that reskilling may be insufficient,” suggesting the government should consider “forms of unconditional income support such as basic income.” The federal response has been silence.

Provincial efforts show more variation but similar limitations. Ontario invested an additional $100 million in 2024-25 through the Skills Development Fund Training Stream. Ontario’s Bill 194, passed in 2024, focuses on strengthening cybersecurity and establishing accountability, disclosure, and oversight obligations for AI use across the public sector. Bill 149, the Working for Workers Four Act, received Royal Assent on 21 March 2024, requiring employers to disclose in job postings whether they’re using AI in the hiring process, effective 1 January 2026.

Quebec’s approach emphasises both innovation commercialisation through tax incentives and privacy protection through Law 25, major privacy reform that includes requirements for transparency and safeguards around automated decision-making, making it one of the first provincial frameworks to directly address AI implications. British Columbia has released its own framework and principles to guide AI use.

None of these initiatives addresses the core problem: when AI makes it economically rational for companies to consolidate operations in the United States or eliminate positions entirely, retraining workers for jobs that no longer exist becomes futile. Due to Canada’s federal style of government with constitutional divisions of legislative powers, AI policy remains decentralised and fragmented across different levels and jurisdictions. The failure of the Artificial Intelligence and Data Act (AIDA) to pass into law before the 2025 election has left Canada with a significant regulatory gap precisely when comprehensive frameworks are most needed.

Measurement as Policy Failure

The most striking aspect of Canada’s response is the absence of robust measurement frameworks. Statistics Canada provides experimental estimates of AI occupational exposure, finding that in May 2021, 31% of employees aged 18 to 64 were in jobs highly exposed to AI and relatively less complementary with it, whilst 29% were in jobs highly exposed and highly complementary. The remaining 40% were in jobs not highly exposed.

These estimates measure potential exposure, not actual impact. A job may be technically automatable without being automated. As Statistics Canada acknowledges, “Exposure to AI does not necessarily imply a risk of job loss. At the very least, it could imply some degree of job transformation.” This framing is methodologically appropriate but strategically useless. Policymakers need to know which jobs are being affected, at what rate, in which sectors, and with what consequences.

What’s missing is real-time tracking of AI adoption rates by industry, firm size, and region, correlated with indicators of productivity and employment. In 2024, only approximately 6% of Canadian businesses were using AI to produce goods or services, according to Statistics Canada. This low adoption rate might seem reassuring, but it actually makes the measurement problem more urgent. Early adopters are establishing patterns that laggards will copy. By the time AI adoption reaches critical mass, the window for proactive policy intervention will have closed.

Job posting trends offer another measurement approach. In Canada, postings for AI-competing jobs dropped by 18.6% in 2023, followed by an 11.4% drop in 2024. AI-augmenting roles saw smaller declines of 9.9% in 2023 and 7.2% in 2024. These figures suggest displacement is already underway, concentrated in roles most vulnerable to full automation.

Statistics Canada’s findings reveal that 83% to 90% of workers with a bachelor’s degree or higher held jobs highly exposed to AI-related job transformation in May 2021, compared with 38% of workers with a high school diploma or less. This inverts conventional wisdom about technological displacement. Unlike previous automation waves that primarily affected lower-educated workers, AI poses greatest risks to knowledge workers with formal educational credentials, precisely the population Canadian universities are designed to serve.

Policy Levers and Their Limitations

Within current political and fiscal constraints, what policy levers could Canadian governments deploy to retain and create added-value service roles?

Tax incentives represent the most politically palatable option, though their effectiveness is questionable. Budget 2024 proposed a new Canadian Entrepreneurs’ Incentive, reducing the capital gains inclusion rate to 33.3% on a lifetime maximum of $2 million CAD in eligible capital gains. The budget simultaneously proposed increasing the capital gains inclusion rate from 50% to two-thirds (66.7%) for businesses effective June 25, 2024, creating significant debate within the technology industry.

The Scientific Research and Experimental Development (SR&ED) tax incentive programme, which provided $3.9 billion in tax credits against $13.7 billion of claimed expenditures in 2021, underwent consultation in early 2024. But tax incentives face an inherent limitation: they reward activity that would often occur anyway, providing windfall benefits whilst generating uncertain employment effects.

Procurement rules offer more direct leverage. The federal government’s creation of an Office of Digital Transformation aims to scale technology solutions whilst eliminating redundant procurement rules. The Canadian Chamber of Commerce called for participation targets for small and medium-sized businesses. However, federal IT procurement has long struggled with misaligned incentives and internal processes.

The more aggressive option would be domestic content requirements for government contracts. The Keep Call Centers in America Act essentially does this for U.S. federal contracts. Canada could adopt similar provisions, requiring that customer service, IT support, data analysis, and other service functions for government contracts employ Canadian workers.

Such requirements face immediate challenges. They risk retaliation under trade agreements, particularly the Canada-United States-Mexico Agreement. They may increase costs without commensurate benefits. Yet the alternative, allowing AI-driven reshoring to hollow out Canada’s service economy whilst maintaining rhetorical commitment to free trade principles, is not obviously superior.

Retraining programmes represent the policy option with broadest political support and weakest evidentiary basis. The premise is that workers displaced from AI-exposed occupations can acquire skills for AI-complementary or AI-insulated roles. This premise faces several problems. First, it assumes sufficient demand exists for the occupations workers are being trained toward. If AI eliminates more positions than it creates or complements, retraining simply reshuffles workers into a shrinking pool. Second, it assumes workers can successfully transition between occupational categories, despite research showing that mid-career transitions often result in significant wage losses.

Research from the Institute for Research on Public Policy found that generative AI is more likely to transform work composition within occupations rather than eliminate entire job categories. Most occupations will evolve rather than disappear, with workers needing to adapt to changing task compositions. This suggests workers must continuously adapt as AI assumes more routine tasks, requiring ongoing learning rather than one-time retraining.

Recent Canadian government AI consultations highlight the skills gap in AI knowledge and the lack of readiness amongst workers to engage with AI tools effectively. Given that 57.4% of workers are in roles highly susceptible to AI-driven disruption in 2024, this technological transformation is already underway, yet most workers lack the frameworks to understand how their roles will evolve or what capabilities they need to develop.

Creating Added-Value Roles

Beyond retention, Canadian governments face the challenge of creating added-value roles that justify higher wages than comparable U.S. positions and resist automation pressures. The 2024 federal budget’s AI investments totalling $2.4 billion reflect a bet that Canada can compete in developing AI technology even as it struggles to manage AI’s labour market effects.

Canada was the first country to introduce a national AI strategy and has invested over $2 billion since 2017 to support AI and digital research and innovation. The country was recently ranked number 1 amongst 80 countries (tied with South Korea and Japan) in the Center for AI and Digital Policy’s 2024 global report on Artificial Intelligence and Democratic Values.

These achievements have not translated to commercial success or job creation at scale. Canadian AI companies frequently relocate to the United States once they reach growth stage, attracted by larger markets, deeper venture capital pools, and more favourable regulatory environments.

Creating added-value roles requires not just research excellence but commercial ecosystems capable of capturing value from that research. On each dimension, Canada faces structural disadvantages. Venture capital investment per capita lags the United States significantly. Toronto Stock Exchange listings struggle to achieve valuations comparable to NASDAQ equivalents. Procurement systems remain biased toward incumbent suppliers, often foreign multinationals.

The Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27 in June 2022, was designed to promote responsible AI development in Canada’s private sector. The legislation has been delayed indefinitely pending an election, leaving Canada without comprehensive AI-specific regulation as adoption accelerates.

Added-value roles in the AI era are likely to cluster around several categories: roles requiring deep contextual knowledge and relationship-building that AI struggles to replicate; roles involving creative problem-solving and judgement under uncertainty; roles focused on AI governance, ethics, and compliance; and roles in sectors where human interaction is legally required or culturally preferred.

Canadian competitive advantages in healthcare, natural resources, financial services, and creative industries could theoretically anchor added-value roles in these categories. Healthcare offers particular promise: AI will transform clinical documentation, diagnostic imaging interpretation, and treatment protocol selection, but the judgement-intensive aspects of patient care in complex cases remain difficult to automate fully.

Natural resource industries such as mining and forestry combine physical environments where automation faces practical limits with analytical work where AI excels at pattern recognition in geological and environmental data. Financial services increasingly deploy AI for routine analysis and risk assessment, but relationship management with high-net-worth clients and structured financing for complex transactions require human judgement and trust-building.

Creative industries present paradoxes. AI generates images, writes copy, and composes music, seemingly threatening creative workers most directly. Yet the cultural and economic value of creative work often derives from human authorship and unique perspective. Canadian film, television, music, and publishing industries could potentially resist commodification by emphasising distinctly Canadian voices and stories that AI-generated content struggles to replicate.

These opportunities exist but won’t materialise automatically. They require active industrial policy, targeted educational investments, and willingness to accept that some sectors will shrink whilst others grow. Canada’s historical reluctance to pursue aggressive industrial policy, combined with provincial jurisdiction over education and workforce development, makes coordinated national strategies politically difficult to implement.

Preparing for Entry-Level Displacement

The question of how labour markets should measure and prepare for entry-level displacement requires confronting uncomfortable truths about career progression and intergenerational equity.

The traditional model assumed entry-level positions served essential functions. They allowed workers to develop professional norms, build tacit knowledge, establish networks, and demonstrate capability before advancing to positions with greater responsibility.

AI is systematically destroying this model. When systems can perform entry-level analysis, customer service, coding, research, and administrative tasks as well as or better than recent graduates, the economic logic for hiring those graduates evaporates. Companies can hire experienced workers who already possess tacit knowledge and professional networks, augmenting their productivity with AI tools.

McKinsey research estimated that without generative AI, automation could take over tasks accounting for 21.5% of hours worked in the U.S. economy by 2030. With generative AI, that share jumped to 29.5%. Current generative AI and other technologies have potential to automate work activities that absorb 60% to 70% of employees’ time today. The economic value unlocked could reach $2.9 trillion in the United States by 2030 according to McKinsey’s midpoint adoption scenario.

Up to 12 million occupational transitions may be needed in both Europe and the U.S. by 2030, driven primarily by technological advancement. Demand for STEM and healthcare professionals could grow significantly whilst office support, customer service, and production work roles may decline. McKinsey estimates demand for clerks could decrease by 1.6 million jobs, plus losses of 830,000 for retail salespersons, 710,000 for administrative assistants, and 630,000 for cashiers.

For Canadian labour markets, these projections suggest several measurement priorities. First, tracking entry-level hiring rates by sector, occupation, firm size, and geography to identify where displacement is occurring most rapidly. Second, monitoring the age distribution of new hires to detect whether companies are shifting toward experienced workers. Third, analysing job posting requirements to see whether entry-level positions are being redefined to require more experience. Fourth, surveying recent graduates to understand their employment outcomes and career prospects.

This creates profound questions for educational policy. If university degrees increasingly prepare students for jobs that won’t exist or will be filled by experienced workers, the value proposition of higher education deteriorates. Current student debt loads made sense when degrees provided reliable paths to professional employment. If those paths close, debt becomes less investment than burden.

Preparing for entry-level displacement means reconsidering how workers acquire initial professional experience. Apprenticeship models, co-op programmes, and structured internships may need expansion beyond traditional trades into professional services. Educational institutions may need to provide more initial professional socialisation and skill development before graduation.

Alternative pathways into professions may need development. Possibilities include mid-career programmes that combine intensive training with guaranteed placement, government-subsidised positions that allow workers to build experience, and reformed credentialing systems that recognise diverse paths to expertise.

The model exists in healthcare, where teaching hospitals employ residents and interns despite their limited productivity, understanding that medical expertise requires supervised practice. Similar logic could apply to other professions heavily affected by AI: teaching firms, demonstration projects, and publicly funded positions that allow workers to develop professional capabilities under supervision.

Educational institutions must prepare students with capabilities AI struggles to match: complex problem-solving under ambiguity, cross-disciplinary synthesis, ethical reasoning in novel situations, and relationship-building across cultural contexts. This requires fundamental curriculum reform, moving away from content delivery toward capability development, a transformation that educational institutions have historically been slow to implement.

The Uncomfortable Arithmetic

Underlying all these discussions is an arithmetic that policymakers rarely state plainly: if AI can perform tasks at $1 per interaction that previously cost $8 to $15 via human labour, the economic pressure to automate is effectively irresistible in competitive markets. A firm that refuses to automate whilst competitors embrace it will find itself unable to match their pricing, productivity, or margins.

Government policy can delay this dynamic but not indefinitely prevent it. Subsidies can offset cost disadvantages temporarily. Regulations can slow deployment. But unless policy fundamentally alters the economic logic, the outcome is determined by the cost differential.

This is why focusing solely on retraining, whilst politically attractive, is strategically insufficient. Even perfectly trained workers can’t compete with systems that perform equivalent work at a fraction of the cost. The question isn’t whether workers have appropriate skills but whether the market values human labour at all for particular tasks.

The honest policy conversation would acknowledge this and address it directly. If large categories of human labour become economically uncompetitive with AI systems, societies face choices about how to distribute the gains from automation and support workers whose labour is no longer valued. This might involve shorter work weeks, stronger social insurance, public employment guarantees, or reforms to how income and wealth are taxed and distributed.

Canada’s policy discourse has not reached this level of candour. Official statements emphasise opportunity and transformation rather than displacement and insecurity. Budget allocations prioritise AI development over worker protection. Measurement systems track potential exposure rather than actual harm. The political system remains committed to the fiction that market economies with modest social insurance can manage technological disruption of this scale without fundamental reforms.

This creates a gap between policy and reality. Workers experiencing displacement understand what’s happening to them. They see entry-level positions disappearing and advancement opportunities closing, whilst promises of retraining ring hollow when programmes prepare them for jobs that also face automation. The disconnection between official optimism and lived experience breeds cynicism about government competence and receptivity to political movements promising more radical change.

An Honest Assessment

Canada faces AI-driven reshoring pressure that will intensify over the next decade. American policy, combining domestic content requirements with aggressive AI deployment, will pull high-value service work back to the United States whilst using automation to limit the number of workers required. Canadian service workers, particularly in customer-facing roles, back-office functions, and knowledge work occupations, will experience significant displacement.

Current Canadian policy responses are inadequate in scope, poorly targeted, and insufficiently funded. Tax incentives provide uncertain benefits. Procurement reforms face implementation challenges. Retraining programmes assume labour demand that may not materialise. Measurement systems track potential rather than actual impacts. Added-value role creation requires industrial policy capabilities that Canadian governments have largely abandoned.

The policy levers available can marginally improve outcomes but won’t prevent significant disruption. More aggressive interventions face political and administrative obstacles that make implementation unlikely in the near term.

Entry-level displacement is already underway and will accelerate. Traditional career progression pathways are breaking down. Educational institutions have not adapted to prepare students for labour markets where entry-level positions are scarce. Alternative mechanisms for acquiring professional experience remain underdeveloped.

The fundamental challenge is that AI changes the economic logic of labour markets in ways that conventional policy tools can’t adequately address. When technology can perform work at a fraction of human cost, neither training workers nor subsidising their employment provides sustainable solutions. The gains from automation accrue primarily to technology owners and firms whilst costs concentrate amongst displaced workers and communities.

Addressing this requires interventions beyond traditional labour market policy: reforms to how technology gains are distributed, strengthened social insurance, new models of work and income, and willingness to regulate markets to achieve social objectives even when this reduces economic efficiency by narrow measures.

Canadian policymakers have not demonstrated appetite for such reforms. The political coalition required has not formed. The public discourse remains focused on opportunity rather than displacement, innovation rather than disruption, adaptation rather than protection.

This may change as displacement becomes more visible and generates political pressure that can’t be ignored. But policy developed in crisis typically proves more expensive, less effective, and more contentious than policy developed with foresight. The window for proactive intervention is closing. Once reshoring is complete, jobs are eliminated, and workers are displaced, the costs of reversal become prohibitive.

The great service job reversal is not a future possibility. It’s a present reality, measurable in employment data, visible in job postings, experienced by early-career workers, and driving legislative responses in the United States. Canada can choose to respond with commensurate urgency and resources, or it can maintain current approaches and accept the consequences. But it cannot pretend the choice doesn’t exist.

References & Sources

  • 59 AI Job Statistics: Future of U.S. Jobs | National University
  • Reshoring Initiative 2024 Annual Report: 244,000 U.S. Manufacturing Jobs Were Announced in 2024 via Reshoring, FDI
  • Reshoring manufacturing to the US: The role of AI, automation and digital labor | IBM
  • 2026 Manufacturing Industry Outlook | Deloitte Insights
  • Canada | The Essex AI Policy Observatory for the World of Work | University of Essex
  • Harnessing Generative AI: Navigating the Transformative Impact on Canada’s Labour Market – IRPP
  • Budget 2024 aims to upskill Canadian workers affected by AI | Canadian HR Reporter
  • The Keep Call Centers in America Act: A Turning Point?
  • US Reshoring Bill Sparks Debate on Future of Offshore Call Centers
  • Gartner Predicts Conversational AI Will Reduce Contact Center Agent Labor Costs by $80 Billion in 2026
  • AI in Customer Service | IBM
  • AI is not just ending entry-level jobs. It’s the end of the career ladder as we know it – CNBC
  • First-of-its-kind Stanford study says AI is starting to have a ‘significant and disproportionate impact’ on entry-level workers in the U.S. | Fortune
  • Who’s Losing Jobs to AI? New Stanford Analysis Breaks It Down | TIME
  • Yes, AI is affecting employment. Here’s the data. – ADP Research
  • New study sheds light on what kinds of workers are losing jobs to AI – CBS News
  • The SignalFire State of Tech Talent Report – 2025
  • Sorry, grads: Entry-level tech jobs are getting wiped out – SF Standard
  • Exposure to artificial intelligence in Canadian jobs: Experimental estimates – Statistics Canada
  • Experimental Estimates of Potential Artificial Intelligence Occupational Exposure in Canada – Statistics Canada
  • What’s in #Budget2024 for Canadian tech? | BetaKit
  • Deputy Prime Minister announces action to protect and create good-paying jobs for Canadian workers – Canada.ca
  • Federal Budget 2024: AI Update – McCarthy Tétrault
  • Canadian Sovereign AI Compute Strategy – Innovation, Science and Economic Development Canada
  • Federal government outlines $2 billion in AI compute spending commitment | BetaKit
  • The state of AI in 2025: Agents, innovation, and transformation – McKinsey
  • Generative AI and the future of work in America | McKinsey
  • The economic potential of generative AI: The next productivity frontier – McKinsey
  • Keep Call Centers in America Act of 2025 – Senator Gallego
  • Gallego, Justice Introduce Bipartisan Bill to Protect Americans’ Access to Quality Customer Service – Senator Ruben Gallego
  • Can retraining programs ease the fear of AI job loss? – Northeastern University
  • Jobs Lost to Automation Statistics in 2024 | TeamStage
  • Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages – McKinsey
  • AI Voice Agents: 2025 Update | Andreessen Horowitz
  • Voice AI Technology: Cutting Call Center Costs by 60% in 2025 | SideTool
  • Reshoring Initiative 2024 Annual Report, Plus 1Q2025 | Reshoring Initiative
  • Reshoring Reality: What’s Fueling the Manufacturing Revival? | Camoin Associates
  • AI Job Displacement 2025: Which Jobs Are At Risk? | Final Round AI
  • 73 AI Job Replacement Statistics (2025 Reports & Data) | DemandSage
  • The Future of Jobs Report 2025 | World Economic Forum
  • Artificial Intelligence 2025 – Canada | Chambers and Partners

Tim Green
UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk

TCP Connections With DAP Debuggers, Different Formats for Numeric Values, and More in CLion 2026.1 EAP

The Early Access Program (EAP) for CLion 2026.1 is nearing its end, bringing a range of improvements to debugging capabilities, build tools, project formats, and more. This post is a brief overview of what is already available in the latest EAP build. As always, EAP builds are free to use, so you can explore all the new features at no cost before the stable release.

DOWNLOAD CLION 2026.1 EAP

Debugger

Communicating with DAP debuggers over TCP. In CLion 2025.3, we introduced support for the Debug Adapter Protocol (DAP). It allows CLion to communicate with a range of debuggers beyond LLDB and GDB. Now, in addition to stdin/stdout, we’ve added support for TCP connections with DAP debuggers.

TCP support gives you more flexibility in choosing DAP debuggers to work with, including those that only work via TCP. You can now also choose between two modes: Launch and Attach, depending on which one your DAP debugger requires. To learn more about configuring a DAP debugger and specific settings, read the documentation.

Viewing numeric values in different formats. When examining the suspended program, you can now change the number format for individual variables, switching between decimal, hexadecimal, octal, or binary. This allows you to see values in a format better suited for a specific use case, whether it’s a human-readable number, a memory address, or file permissions.

To change the number format, right-click a variable in the Threads & Variables pane, select View as…, and then select the desired format. You can also change the padding format in the same menu.
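
To see why the same bits read differently in each base, here’s a quick Python sketch (outside CLion) of the four formats the View as… menu offers, using a Unix file mode as the example value:

```python
# The same integer rendered in the four bases CLion's debugger offers.
value = 420  # chmod 644 expressed as a plain decimal number

print(f"decimal: {value}")       # 420
print(f"hex:     {value:#x}")    # 0x1a4 - natural for addresses and bitmasks
print(f"octal:   {value:#o}")    # 0o644 - instantly readable as rw-r--r--
print(f"binary:  {value:#b}")    # 0b110100100 - shows individual flag bits
```

Octal makes the permission triplets obvious at a glance, which is exactly the kind of case where switching the debugger’s display format pays off.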

We encourage you to try the feature and let us know what you think in CPP-12303.

Faster debugging in remote development mode. Debugging in remote development scenarios is now much more responsive and stable, thanks to a completely reworked debugger architecture. The Debug tool window and breakpoints are now rendered on the IDE frontend, while the backend hosts the active debugger session and communicates with the target process. Note that we’re still addressing some issues and will continue refining this feature.

Natvis performance improvements. We’ve achieved dramatic performance improvements for the LLDB-based custom debugger when using Natvis expressions with the MSVC toolchain. Internal tests have shown debugging speed improvements of more than 80x and a 2.5x reduction in memory usage. This is especially beneficial for developers working with large projects that rely heavily on the Natvis framework. These improvements benefit users of both CLion and Rider. For a detailed technical breakdown of how we achieved these gains, check out the Rider team’s blog post.

Updated bundled LLDB. The bundled LLDB version available for macOS and Linux users has been updated from 19.1.7 to 21.1.7, bringing the latest debugger improvements and bug fixes from the LLVM project. See the LLDB release notes for detailed information about what’s new in the debugger.

Build tools and project formats

Support for custom project models. CLion now offers an easy way to set up or fine-tune code insight for all types of projects – including those based on unsupported project formats – and for non-project files, too. This feature also simplifies migration from VS Code for users who already work with C/C++ properties, making the transition to CLion even smoother. You can open projects that were previously edited in VS Code, and CLion will recognize settings from the c_cpp_properties.json file. You can even tweak the settings in this file, and CLion will apply them.
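
For reference, a minimal c_cpp_properties.json of the kind CLion can pick up from a former VS Code project (the paths, defines, and configuration name here are illustrative):

```json
{
  "version": 4,
  "configurations": [
    {
      "name": "Linux",
      "compilerPath": "/usr/bin/gcc",
      "includePath": ["${workspaceFolder}/**", "/opt/vendor-sdk/include"],
      "defines": ["DEBUG"],
      "cppStandard": "c++20"
    }
  ]
}
```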

Code insight features for external projects. The IDE can now provide full code insight for external projects defined in the CMake ExternalProject_Add() section. CLion loads these projects as part of the primary CMake project. This gives you access to error detection, warnings, a search for usages, and refactoring capabilities without the need to load external projects separately. This update is particularly valuable for embedded frameworks such as Zephyr, STM32, and ESP-IDF, where projects are often split into multiple parts.
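
As an illustration (the project and directory names are invented), this is the shape of an ExternalProject_Add() call whose sources CLion can now index alongside the primary project:

```cmake
include(ExternalProject)

# An in-tree subproject with its own CMake build, e.g. a bootloader
# in an embedded firmware repository.
ExternalProject_Add(bootloader
  SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/bootloader
  CMAKE_ARGS      -DCMAKE_BUILD_TYPE=Release
  INSTALL_COMMAND ""   # build only; no install step
)
```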

Reduced CLion Nova installer size. We’ve significantly reduced CLion’s disk footprint. After installation, the bundled C/C++ Language Support plugin now consumes 50% less disk space on average across all platforms. Overall, the total IDE footprint on disk has been reduced by 1 GB.

Improvements to CMake support: 

  • You can now specify command-line options for CMake profiles faster, thanks to code completion in the CMake options and Build options fields. Simply start typing an option, and a completion list will appear. Select the desired option from the list.
  • CMake preset names that you see in the IDE’s UI are now based on the displayName value specified in CMakePresets.json – instead of the name value as before. This means that you can now use more human-friendly, descriptive names for your CMake presets and see them in the CMake settings, tool window, and toolbar widget.
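
For example, with a preset like the following (the names are illustrative), the IDE now displays “Release (Ninja)” rather than the terse rel-ninja identifier:

```json
{
  "version": 6,
  "configurePresets": [
    {
      "name": "rel-ninja",
      "displayName": "Release (Ninja)",
      "generator": "Ninja",
      "binaryDir": "${sourceDir}/build/release",
      "cacheVariables": { "CMAKE_BUILD_TYPE": "Release" }
    }
  ]
}
```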

Language support

Unit testing support for Meson. We’ve made significant progress in making unit test integration independent of the CMake project format. All four major test frameworks – GoogleTest, Catch2, Boost.Test, and doctest – are now fully supported for Meson projects. This means you can enjoy the same comprehensive testing functionality that was previously available only for CMake projects, including running tests directly from the editor, viewing test results in a dedicated tool window, and navigating between tests and their implementations.
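
As a sketch (the target and file names are made up), here is a meson.build fragment wiring a GoogleTest binary into Meson’s test runner, which the IDE’s test integration can now discover:

```meson
# Declare the GoogleTest dependency with its bundled main().
gtest_dep = dependency('gtest', main : true)

# Build the test executable and register it with 'meson test'.
tests_exe = executable('unit_tests', 'test_math.cpp',
                       dependencies : gtest_dep)
test('unit tests', tests_exe)
```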

Improvements to CLion Nova code folding. The IDE automatically recognizes certain code structures in the editor and makes them foldable for better code organization. Previously, our default language engine, CLion Nova, had fewer code-folding options than the legacy CLion Classic. Now, the default engine offers full feature parity, making code navigation and organization more intuitive and aligned with what CLion Classic users have come to expect.

Try the EAP now

The CLion 2026.1 EAP is available for download now and is completely free to use. We encourage you to try out these new features and share your feedback with us in the comments below or in our issue tracker. Your input during the EAP helps us identify issues and refine functionality before the stable release.

DOWNLOAD CLION 2026.1 EAP

Hashtag Jakarta EE #323

Welcome to issue number three hundred and twenty-three of Hashtag Jakarta EE!

Right now, I am on my way home from Devnexus 2026. It was as busy as always, so I haven’t been able to finish up my post from the event, but I promise it will be out shortly. I will only have 24 hours at home this time before I head to JavaLand 2026. They have changed the venue again, and it is going to be held at Europa-Park in southern Germany. Hopefully, this change of venue will be more successful than the last couple of editions at Nürburgring.

Another thing that happened last week was that I became an IBM Champion. It makes sense, since I often use IBM technology such as Open Liberty in my demos at conferences around the world. Since I am new to the program, I don’t really know what it will mean, but I am excited to find out.

The work on Jakarta EE 12 continues. In the weekly platform call, progress is discussed, and the projects for the individual component specifications report on what they are focusing on at the moment. Due to inactivity in some of the projects, and recent layoffs among our member companies, the team is discussing how to bring on new committers and get them up to speed as fast as possible.

If you are still waiting for the follow-up post from my Will AI Kill Open Source post a couple of weeks ago, don’t despair. I have so much material for the next post and just need a little breathing room to organise my thoughts. In the context of this theme, I have done some pretty cool experiments that I am very eager to share once they are in a sharable state. Stay tuned…

Ivar Grimstad


191 Demos. 0 Signups. Then We Removed the Telegram Requirement.

Last week we wrote about why 191 developers tried our AI messenger and nobody signed up. Our hypothesis: the value proposition wasn’t clear — developers saw a chat interface but didn’t understand why it was different from any other chatbot.

We were half right.

What We Missed

While analyzing the funnel, we found a second friction point we’d overlooked: authentication was Telegram-only.

To sign up for Agenium Messenger, you had to:

  1. Have a Telegram account
  2. Authorize our bot
  3. Complete a multi-step mobile polling flow

For developers building on Google A2A, MCP, or custom agent stacks — many of whom don’t use Telegram daily — this was a wall. Not an insurmountable one, but enough friction to produce a predictable result: try the demo as a guest, see something mildly interesting, leave.

We shipped email magic link authentication this week. No Telegram required. One email address, one click, done.

Why Email Matters More for Agents Than for Humans

Here’s the insight that surprised us.

For human apps, email vs. Telegram vs. GitHub OAuth is mostly UX preference. You’re logging a human in.

For agent infrastructure, email is different. It’s not just a login method — it’s an identity anchor.

An AI agent needs an address that:

  • Is portable across model upgrades (the agent is still “the same agent” when you switch from GPT-4 to Claude)
  • Is resolvable by other agents that have never met it before
  • Carries trust signals — DMARC records, sending reputation, DKIM signing — that don’t exist with arbitrary auth tokens

Email has decades of trust infrastructure baked in. When agent@yourdomain.com sends a message, there’s a verifiable chain of custody (DNS records, signing keys, bounce history) that a randomly generated OAuth token simply doesn’t have.

We didn’t ship email auth just to reduce friction. We shipped it because email is a better identity primitive for the agent web than anything else that currently exists at scale.

What the Numbers Show

Before email auth (first 10 days):

  • 191 demo sessions started
  • 0 signups
  • Auth method available: Telegram only

After email auth (first 48 hours):

  • Email auth live at chat.agenium.net
  • Guest mode: one click, no auth required
  • Email option: magic link, no password

It’s too early to report a conversion number. But we can report that two things are now true that weren’t before:

  1. A developer who has never opened Telegram can sign up in under 30 seconds
  2. The agent address they get (username@agenium.net) is email-anchored and resolvable

The Discovery Layer Connection

This matters for the broader A2A/MCP ecosystem because agent discovery has a chicken-and-egg problem with identity.

When Agent A wants to find Agent B in a registry, it needs to look up something stable — an address that will still be valid six months from now, after deployments, model upgrades, and server migrations.

agent://username.agenium.net solves this. The addressing layer is separate from the compute layer. You can swap the model underneath without changing the address that other agents use to find you.
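
A toy Python sketch of that separation (the class and field names are hypothetical, not Agenium’s actual API): the stable address is the lookup key, and the compute backend behind it can be swapped freely:

```python
# Hypothetical registry: stable agent addresses mapped to swappable backends.
class AgentRegistry:
    def __init__(self):
        self._backends = {}

    def register(self, address, endpoint):
        """Point a stable address at the current compute backend."""
        self._backends[address] = endpoint

    def resolve(self, address):
        """What other agents call: stable address in, current endpoint out."""
        return self._backends[address]

registry = AgentRegistry()
addr = "agent://alice.agenium.net"

registry.register(addr, {"model": "gpt-4", "host": "10.0.1.5"})
# Months later: new model, new server. The address other agents use never changes.
registry.register(addr, {"model": "claude", "host": "10.0.2.9"})

print(registry.resolve(addr)["model"])  # claude
```

Other agents only ever store the address, so a model upgrade is invisible to them, which is the whole point of decoupling addressing from compute.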

Email is the most natural way to bootstrap that address:

  • Humans already have email identity
  • Email domains are owned and verified
  • Email-based authentication creates an implicit link between a human (or organization) and their agent’s public address

What We’re Shipping Next

The current state of Agenium Messenger:

  • ✅ Guest mode (no account, instant demo)
  • ✅ Email magic link auth
  • ✅ Telegram Login (for TG-native users)
  • ⏳ GitHub OAuth (credentials being configured)
  • ✅ Stable agent:// addresses
  • ✅ A2A-compatible Agent Cards

We’re building toward one goal: every AI agent deserves a stable, resolvable address that persists across the agent’s lifetime — not its deployment’s lifetime.

Try it: chat.agenium.net

No Telegram required.

Building Agenium in public — the DNS layer for the agent web. Follow along at @AgeniumPlatform.