JakartaOne by Jozi-JUG 2026

When I Code Java was cancelled on short notice, Phillip, Buhake, and I scrambled to put together a substitute event. With funding from the Eclipse Foundation’s Open Community Meetup concept and the organisation of Jozi-JUG, we created JakartaOne by Jozi-JUG, where Phillip and I presented. The event had 208 registered attendees.

I started the evening by presenting The Past, Present, and Future of Enterprise Java. Phillip took over after me and presented AI with (and in) Quarkus. We were hosted by Investec, who also provided food and drinks. After the talks, prizes and swag were raffled off to the attendees, which is always very appreciated.

This was the first JakartaOne by [JUG] we have done, but it certainly won’t be the last. We even discussed turning it into a half-day or full-day conference and branding it as JakartaOne South Africa or JakartaOne Johannesburg next year.

Ivar Grimstad


Codebase Intelligence

Navigating a new repository can be overwhelming. I built a “Codebase Intelligence” tool that turns static code into an interactive knowledge base using Retrieval-Augmented Generation (RAG). Instead of the AI guessing what your code does, it reads the relevant files before answering.

By using semantic search and vector embeddings, you can ask questions like:
“How is the authentication flow handled?”
“Where are the API routes defined?”
and get a context-aware answer backed by your actual code.

I reached some key milestones while building this tool:

  • Automated an ingestion pipeline using LangChain and an OpenAI embedding model to fetch, chunk, and embed GitHub repos.
  • Leveraged the Pinecone vector database for high-performance semantic search and metadata filtering.
  • Integrated GPT-4 and the Vercel AI SDK to manage the conversation flow.
  • Implemented GitHub Actions to handle automated daily maintenance and cleanup of the database.
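The real pipeline relies on LangChain, OpenAI embeddings, and Pinecone, but the core retrieval idea can be sketched in plain Python: chunk the source, embed each chunk, and return the most similar chunk for a question. The hashed bag-of-words “embedding” below is a toy stand-in for a real embedding model, and all function names here are illustrative, not the tool’s actual API:

```python
import math
import zlib

def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping chunks, as an ingestion pipeline would."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

def embed(text: str, dims: int = 64) -> list[float]:
    """Toy embedding: a hashed bag of words, normalized to unit length.
    A real pipeline would call an embedding model here instead."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[zlib.crc32(word.encode()) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def top_match(question: str, chunks: list[str]) -> str:
    """Return the chunk most similar to the question (cosine similarity)."""
    q = embed(question)
    scores = [sum(a * b for a, b in zip(q, embed(c))) for c in chunks]
    return chunks[scores.index(max(scores))]
```

A vector database like Pinecone replaces the linear scan in top_match with an approximate nearest-neighbor index, which is what makes this approach fast at repository scale.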

Check it out here: https://codebase-intelligence-nu.vercel.app/

Open Source and Contributions 🌟
I’ve made this tool open source! Whether you want to use it for your own repos or help improve the ingestion logic, feel free to check out the code or create an Issue.

GitHub repository link: https://github.com/nancy-kataria/codebase-intelligence


MCP for .NET: AI Agents That Can Actually Do Things

Crysta is an AI app where you can build your own AI agents for real business use.

What makes it powerful is this: you are not limited to the default behavior of a chatbot.
You can connect different MCP servers and give your agent practical capabilities like scheduling, reservations, and data access.

In short, Crysta helps you build an AI agent that does more than talk: it can actually get work done.

What Users Can Connect

With MCP, an agent can connect to tools such as:

  • Google Calendar workflows
  • Reservation and scheduling systems
  • Business data tools exposed through MCP
  • Knowledge tools like DeepWiki (through MCP)

So instead of only answering a request like “book a meeting,” the agent can actually do it through a connected tool.

Why This Matters for .NET Developers

MCP gives .NET developers a straightforward way to add real capabilities to an AI agent without building every integration yourself. Instead of wiring each API directly into your app, you connect MCP‑compatible tools and let the agent use them through the same function‑calling flow.

This leads to two big benefits:

  • Faster development: new features come from adding MCP tools, not writing new integrations.
  • Cleaner architecture: your agent grows by adding tools, not by adding more prompt logic or custom glue code.

How We Built It (High Level)

We kept the flow simple:

  1. User adds an MCP server link to an agent.
  2. System validates the link and reads tool metadata.
  3. Connected MCP tools become part of that agent’s available actions.
  4. During chat, the model can call those tools when needed.
  5. We track failures and connection health so users can fix broken links quickly.

Packages We Use for This

These are the key packages behind this MCP workflow:

  • ModelContextProtocol
  • Microsoft.Extensions.AI
  • Microsoft.Extensions.AI.OpenAI
  • Azure.AI.OpenAI

This combination gives us MCP tool discovery plus a solid function-calling pipeline in .NET.

Small Code Examples (Based on Our Code)

Here’s a sample snippet that reflects the structure we use in production.

1) MCP client + validation service

using ModelContextProtocol.Client;

public class McpServerService : IMcpServerService
{
    public async Task<StoredMcpServer> ValidateAndGetMcpInfoAsync(string mcpUrl)
    {
        if (!Uri.TryCreate(mcpUrl, UriKind.Absolute, out _))
        {
            return new StoredMcpServer { IsValid = false, ErrorMessage = "Invalid URL format" };
        }

        try
        {
            var transport = new HttpClientTransport(
                new HttpClientTransportOptions { Endpoint = new Uri(mcpUrl) },
                loggerFactory: null);

            var client = await McpClient.CreateAsync(transport);
            var serverInfo = client.ServerInfo;

            return new StoredMcpServer
            {
                Url = mcpUrl,
                Title = serverInfo?.Name ?? "Unknown MCP Server",
                Description = serverInfo?.Title ?? "No description available",
                AddedDate = DateTime.Now,
                IsValid = serverInfo is not null
            };
        }
        catch (Exception ex)
        {
            return new StoredMcpServer { IsValid = false, ErrorMessage = ex.Message };
        }
    }

    public async Task<McpClient> CreateClientAsync(string mcpUrl, CancellationToken cancellationToken = default)
    {
        var transport = new HttpClientTransport(
            new HttpClientTransportOptions { Endpoint = new Uri(mcpUrl) },
            loggerFactory: null);
        return await McpClient.CreateAsync(transport);
    }
}

2) Load MCP tools from agent skills (runtime)

using Microsoft.Extensions.AI;
using Microsoft.Extensions.DependencyInjection;
using ModelContextProtocol.Client;

private async Task<List<AIFunction>> GetMcpToolsAsync()
{
    if (interactiveAgent?.Skills == null) return [];

    var allTools = new List<AIFunction>();
    using var scope = ServiceScopeFactory.CreateScope();
    var mcpServerService = scope.ServiceProvider.GetRequiredService<IMcpServerService>();

    foreach (var skill in interactiveAgent.Skills)
    {
        if (skill.SkillType != SkillType.MCP) continue;
        if (skill.SkillJson is null) continue;

        var check = await mcpServerService.ValidateAndGetMcpInfoAsync(skill.SkillJson);
        if (!check.IsValid) continue;

        var client = await mcpServerService.CreateClientAsync(skill.SkillJson);
        var tools = await client.ListToolsAsync();
        allTools.AddRange(tools);
    }

    return allTools;
}

3) Attach MCP tools to Microsoft.Extensions.AI chat pipeline

using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var azureChatClient = new AzureOpenAIClient(
    new Uri(endpoint),
    new ApiKeyCredential(apiKey))
    .GetChatClient(deployment)
    .AsIChatClient();

var mcpTools = await GetMcpToolsAsync();

var chatClient = new ChatClientBuilder(azureChatClient)
    .ConfigureOptions(options =>
    {
        options.Tools ??= [];
        foreach (var tool in mcpTools)
        {
            options.Tools.Add(tool);
        }
    })
    .UseFunctionInvocation(c => c.AllowConcurrentInvocation = true)
    .Build();

This is the exact idea we use: validate MCP links, discover tools, then pass those tools into the chat client so the agent can call them.

Closing

MCP moved our product from “AI that answers” to “AI that acts.”

That is the difference users notice immediately.

If you want to see the end result, check it here: https://crystacode.ai

AWS Won’t Stop Charging You. Ever. Deploy Budget Alerts as Code with Terraform Before It’s Too Late 🔥

AWS won’t stop charging you when your budget runs out. A leaked key, a forgotten GPU instance, a runaway Lambda – and the bill arrives 30 days later. Here’s how to deploy budget alerts, Slack notifications, anomaly detection, and an automatic kill switch with Terraform.

A dev team got an $89K bill overnight after committing API keys to GitHub – bots found them in 4 minutes and spun up 500 GPU instances for crypto mining. A dev left a SageMaker notebook running over the holidays – $4,800 gone. A misconfigured Auto Scaling group spun up 200 instances overnight. Setting a budget in the console takes 5 minutes but nobody does it. Here’s how to make it impossible to forget. 💀

AWS does not cap your spending by default. There’s no hard limit. No guardrails. Your account is an open credit line to Amazon, and if something goes wrong – a runaway Lambda, a misconfigured Auto Scaling group, a leaked IAM access key – you won’t know until the invoice lands in your inbox. 😱

The fix? Budgets, alerts, anomaly detection, and automated actions – all deployed as code so every account gets them from day one.

💸 The 4 Layers of Cost Protection

Most teams stop at Layer 1. That’s why they still get surprised.

  • Budget Alerts (Email): emails billing admins at thresholds. Response time: hours (someone reads the email).
  • SNS → Slack: posts to your team channel instantly. Response time: minutes (someone sees Slack).
  • Cost Anomaly Detection: ML-powered spike detection 🤖. Response time: hours (catches the weird stuff).
  • Budget Actions: auto-applies deny policies or stops instances 🛡️. Response time: seconds (fully automated).

Let’s deploy all four.

📧 Layer 1: Budget Alerts with Terraform

The aws_budgets_budget resource is the foundation. This sets up email alerts at 50%, 80%, and 100% of your monthly budget:

resource "aws_budgets_budget" "monthly" {
  name         = "${var.account_alias}-monthly-budget"
  budget_type  = "COST"
  limit_amount = var.monthly_budget
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  cost_types {
    include_tax          = true
    include_subscription = true
    include_support      = true
    include_discount     = false
    include_refund       = false
    include_credit       = false
    use_blended          = false
  }

  # Alert at 50% actual spend
  notification {
    comparison_operator       = "GREATER_THAN"
    threshold                 = 50
    threshold_type            = "PERCENTAGE"
    notification_type         = "ACTUAL"
    subscriber_email_addresses = var.alert_emails
  }

  # Alert at 80% actual spend
  notification {
    comparison_operator       = "GREATER_THAN"
    threshold                 = 80
    threshold_type            = "PERCENTAGE"
    notification_type         = "ACTUAL"
    subscriber_email_addresses = var.alert_emails
  }

  # Alert at 100% actual spend
  notification {
    comparison_operator       = "GREATER_THAN"
    threshold                 = 100
    threshold_type            = "PERCENTAGE"
    notification_type         = "ACTUAL"
    subscriber_email_addresses = var.alert_emails
  }

  # Alert at 90% FORECASTED spend (early warning!) 👈
  notification {
    comparison_operator       = "GREATER_THAN"
    threshold                 = 90
    threshold_type            = "PERCENTAGE"
    notification_type         = "FORECASTED"
    subscriber_email_addresses = var.alert_emails
  }
}

variable "monthly_budget" {
  type        = string
  description = "Monthly budget in USD"
  default     = "1000"
}

variable "account_alias" {
  type        = string
  description = "Account alias for naming"
}

variable "alert_emails" {
  type        = list(string)
  description = "Email addresses for budget alerts"
}

⚠️ Critical gotcha: FORECASTED alerts warn you before you hit the limit by projecting current trends to end of month. Most teams only set ACTUAL alerts and get notified after the money is already gone. AWS needs ~5 weeks of usage data to generate forecasts, so set these up early. Always include at least one forecasted threshold.
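To see why a FORECASTED threshold fires earlier, a naive linear projection illustrates the mechanics. AWS’s actual forecast is an ML model, so treat this as a conceptual sketch of the idea, not its algorithm:

```python
def forecast_month_end(spend_to_date: float, day_of_month: int,
                       days_in_month: int = 30) -> float:
    """Naive linear projection of month-end spend from the run rate so far."""
    if day_of_month < 1:
        raise ValueError("day_of_month must be >= 1")
    return spend_to_date / day_of_month * days_in_month

def forecasted_alert_fires(spend_to_date: float, day_of_month: int,
                           budget: float, threshold_pct: float = 90.0,
                           days_in_month: int = 30) -> bool:
    """True when projected month-end spend crosses the FORECASTED threshold."""
    projected = forecast_month_end(spend_to_date, day_of_month, days_in_month)
    return projected > budget * threshold_pct / 100.0
```

At $400 spent by day 10 of a 30-day month, the projection is $1,200, so the 90% FORECASTED alert on a $1,000 budget fires on day 10, while the 50% ACTUAL alert would not trip until spend reaches $500 around day 13.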

Cost: The first two budgets per account are free. Additional budgets cost $0.02/day (~$0.62/month). There’s basically zero excuse not to have them. 🎯

🔔 Layer 2: SNS → Slack Notifications

Email alerts get buried. Slack alerts get seen. Here’s the full pipeline:

Budget → SNS Topic → Lambda → Slack Webhook

Step 1: Create the SNS topic and wire it to the budget

resource "aws_sns_topic" "budget_alerts" {
  name = "budget-alerts"

  tags = {
    Environment = "shared"
    ManagedBy   = "terraform"
  }
}

# SNS topic policy — allow AWS Budgets to publish
resource "aws_sns_topic_policy" "budget_alerts" {
  arn = aws_sns_topic.budget_alerts.arn

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowBudgetsPublish"
        Effect    = "Allow"
        Principal = { Service = "budgets.amazonaws.com" }
        Action    = "SNS:Publish"
        Resource  = aws_sns_topic.budget_alerts.arn
        Condition = {
          StringEquals = {
            "aws:SourceAccount" = data.aws_caller_identity.current.account_id
          }
        }
      }
    ]
  })
}

data "aws_caller_identity" "current" {}

# Update the budget to publish to SNS
resource "aws_budgets_budget" "monthly_with_sns" {
  name         = "${var.account_alias}-monthly-budget"
  budget_type  = "COST"
  limit_amount = var.monthly_budget
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  cost_types {
    include_tax     = true
    include_support = true
    include_credit  = false
    include_refund  = false
    use_blended     = false
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 50
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = var.alert_emails
    subscriber_sns_topic_arns  = [aws_sns_topic.budget_alerts.arn]
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = var.alert_emails
    subscriber_sns_topic_arns  = [aws_sns_topic.budget_alerts.arn]
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = var.alert_emails
    subscriber_sns_topic_arns  = [aws_sns_topic.budget_alerts.arn]
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 90
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = var.alert_emails
    subscriber_sns_topic_arns  = [aws_sns_topic.budget_alerts.arn]
  }
}

Step 2: Deploy a Lambda function that posts to Slack

# IAM role for Lambda
data "aws_iam_policy_document" "lambda_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "budget_slack_lambda" {
  name               = "budget-alert-slack-lambda"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume.json
}

resource "aws_iam_role_policy_attachment" "lambda_basic" {
  role       = aws_iam_role.budget_slack_lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

# Lambda function
data "archive_file" "budget_slack" {
  type        = "zip"
  source_file = "${path.module}/functions/budget_slack.py"
  output_path = "${path.module}/functions/budget_slack.zip"
}

resource "aws_lambda_function" "budget_slack" {
  filename         = data.archive_file.budget_slack.output_path
  source_code_hash = data.archive_file.budget_slack.output_base64sha256
  function_name    = "budget-alert-to-slack"
  role             = aws_iam_role.budget_slack_lambda.arn
  handler          = "budget_slack.lambda_handler"
  runtime          = "python3.12"
  timeout          = 30
  memory_size      = 128

  environment {
    variables = {
      SLACK_WEBHOOK_URL = var.slack_webhook_url
    }
  }
}

# SNS → Lambda subscription
resource "aws_sns_topic_subscription" "budget_to_slack" {
  topic_arn = aws_sns_topic.budget_alerts.arn
  protocol  = "lambda"
  endpoint  = aws_lambda_function.budget_slack.arn
}

resource "aws_lambda_permission" "sns_invoke" {
  statement_id  = "AllowSNSInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.budget_slack.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.budget_alerts.arn
}

The Lambda function code (Python):

# budget_slack.py
import json
import os
import urllib.request

def lambda_handler(event, context):
    """Triggered by SNS budget notification."""
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])

        # AWS Budget SNS messages have a specific format
        account   = message.get("account", "Unknown")
        budget    = message.get("budgetName", "Unknown")
        threshold = message.get("threshold", "?")
        actual    = message.get("actualAmount", "?")
        limit     = message.get("budgetLimit", "?")
        unit      = message.get("unit", "USD")

        # Pick emoji based on threshold
        pct = float(threshold) if threshold != "?" else 0
        if pct >= 100:
            emoji = "🔴"
        elif pct >= 80:
            emoji = "🟠"
        else:
            emoji = "🟡"

        slack_message = {
            "text": (
                f"{emoji} *AWS Budget Alert*\n"
                f"• Budget: *{budget}*\n"
                f"• Spent: *{actual} {unit}* of {limit} {unit} "
                f"(*{threshold}%* threshold crossed)\n"
                f"• Account: {account}"
            )
        }

        webhook_url = os.environ["SLACK_WEBHOOK_URL"]
        req = urllib.request.Request(
            webhook_url,
            data=json.dumps(slack_message).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

Now your team sees this in Slack the moment a threshold is crossed:

🟠 AWS Budget Alert
• Budget: payment-api-prod-monthly-budget
• Spent: $812.50 USD of $1,000.00 USD (80% threshold crossed)
• Account: 123456789012

🔬 Layer 3: Cost Anomaly Detection (Catch the Weird Stuff)

Budgets catch gradual overspend. But what about sudden spikes? Suppose a developer accidentally launches 8 p4d.24xlarge GPU instances instead of a t3.medium. That’s $250/hour vs $0.04/hour — and a budget alert might not fire until the damage is done.

AWS Cost Anomaly Detection uses ML to spot unusual patterns. Deploy it in a few resource blocks:

# Monitor all services for anomalies
resource "aws_ce_anomaly_monitor" "service_monitor" {
  name              = "all-services-anomaly-monitor"
  monitor_type      = "DIMENSIONAL"
  monitor_dimension = "SERVICE"

  tags = {
    ManagedBy = "terraform"
  }
}

# SNS topic for anomaly alerts
resource "aws_sns_topic" "anomaly_alerts" {
  name = "cost-anomaly-alerts"
}

resource "aws_sns_topic_policy" "anomaly_alerts" {
  arn = aws_sns_topic.anomaly_alerts.arn

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowCostAnomalyPublish"
        Effect    = "Allow"
        Principal = { Service = "costalerts.amazonaws.com" }
        Action    = "SNS:Publish"
        Resource  = aws_sns_topic.anomaly_alerts.arn
      }
    ]
  })
}

# Subscribe to anomaly alerts — notify when impact > $100 AND > 20%
resource "aws_ce_anomaly_subscription" "alerts" {
  name      = "cost-anomaly-alerts"
  frequency = "IMMEDIATE"

  monitor_arn_list = [
    aws_ce_anomaly_monitor.service_monitor.arn
  ]

  subscriber {
    type    = "SNS"
    address = aws_sns_topic.anomaly_alerts.arn
  }

  # Alert when BOTH conditions are true:
  # - Anomaly cost impact ≥ $100 (ignore trivial spikes)
  # - Anomaly cost impact ≥ 20% above baseline (catch real anomalies)
  threshold_expression {
    and {
      dimension {
        key           = "ANOMALY_TOTAL_IMPACT_ABSOLUTE"
        match_options = ["GREATER_THAN_OR_EQUAL"]
        values        = ["100"]
      }
    }
    and {
      dimension {
        key           = "ANOMALY_TOTAL_IMPACT_PERCENTAGE"
        match_options = ["GREATER_THAN_OR_EQUAL"]
        values        = ["20"]
      }
    }
  }
}

💡 Pro tip: Use BOTH ANOMALY_TOTAL_IMPACT_ABSOLUTE AND ANOMALY_TOTAL_IMPACT_PERCENTAGE together. Percentage-only alerts fire on a $5 → $10 spike (100% increase, but who cares). Absolute-only alerts miss a $1,000 → $1,200 spike on a high-spend account. Combining both eliminates noise and catches real problems.
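The same AND logic is easy to replicate if you filter anomaly alerts in your own tooling. This hypothetical helper (not an AWS API) mirrors the threshold_expression above:

```python
def should_alert(baseline_daily: float, actual_daily: float,
                 min_impact_usd: float = 100.0, min_impact_pct: float = 20.0) -> bool:
    """Alert only when a spike is large in absolute dollars AND relative to
    baseline, mirroring the ANOMALY_TOTAL_IMPACT_ABSOLUTE + _PERCENTAGE
    AND-expression in the anomaly subscription."""
    impact = actual_daily - baseline_daily
    if baseline_daily <= 0:
        # No baseline to compare against: fall back to the absolute test only
        return impact >= min_impact_usd
    pct_increase = impact / baseline_daily * 100.0
    return impact >= min_impact_usd and pct_increase >= min_impact_pct
```

A $5 → $10 day (100% up, but only $5 of impact) stays quiet, while a $1,000 → $1,200 day ($200 of impact, 20% up) alerts.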

Now you’ll get alerts when AWS detects anomalies like:

⚠️ Cost Anomaly Detected
Service: Amazon Elastic Compute Cloud
Impact: +$847/day above baseline
Root Cause: Unusual number of running instances in us-east-1
Severity: High

Cost: Anomaly Detection is free. Zero excuse not to have it. 🎯

☠️ Layer 4: Budget Actions — The Kill Switch

This is AWS’s native automated response system. When a budget threshold is breached, AWS can automatically apply a deny IAM policy to prevent further resource creation, or stop specific EC2/RDS instances. No Lambda required.

Option A: Auto-apply a deny policy (block new resource creation)

# The deny policy — prevents launching new EC2 and RDS instances
resource "aws_iam_policy" "deny_ec2_rds_create" {
  name = "budget-deny-ec2-rds-create"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "DenyNewCompute"
        Effect = "Deny"
        Action = [
          "ec2:RunInstances",
          "ec2:StartInstances",
          "ec2:CreateVolume",
          "rds:CreateDBInstance",
          "rds:StartDBInstance",
          "sagemaker:CreateNotebookInstance",
          "sagemaker:StartNotebookInstance"
        ]
        Resource = "*"
      }
    ]
  })
}

# IAM role for AWS Budgets to execute actions
data "aws_iam_policy_document" "budgets_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["budgets.amazonaws.com"]
    }
    condition {
      test     = "StringEquals"
      variable = "aws:SourceAccount"
      values   = [data.aws_caller_identity.current.account_id]
    }
    condition {
      test     = "ArnLike"
      variable = "aws:SourceArn"
      values   = ["arn:aws:budgets::${data.aws_caller_identity.current.account_id}:budget/*"]
    }
  }
}

resource "aws_iam_role" "budgets_action" {
  name               = "budgets-action-execution-role"
  assume_role_policy = data.aws_iam_policy_document.budgets_assume.json
}

resource "aws_iam_role_policy" "budgets_action" {
  name = "budgets-action-permissions"
  role = aws_iam_role.budgets_action.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "iam:AttachGroupPolicy",
          "iam:AttachRolePolicy",
          "iam:AttachUserPolicy",
          "iam:DetachGroupPolicy",
          "iam:DetachRolePolicy",
          "iam:DetachUserPolicy"
        ]
        Resource = "*"
      }
    ]
  })
}

# The Budget Action — auto-applies deny policy at 100% spend
resource "aws_budgets_budget_action" "deny_on_exceed" {
  count = var.environment == "prod" ? 0 : 1  # 👈 Never in prod!

  budget_name        = aws_budgets_budget.monthly_with_sns.name
  action_type        = "APPLY_IAM_POLICY"
  approval_model     = "AUTOMATIC"
  notification_type  = "ACTUAL"
  execution_role_arn = aws_iam_role.budgets_action.arn

  action_threshold {
    action_threshold_type  = "PERCENTAGE"
    action_threshold_value = 100
  }

  definition {
    iam_action_definition {
      policy_arn = aws_iam_policy.deny_ec2_rds_create.arn
      roles      = var.dev_role_names  # Roles to restrict
    }
  }

  subscriber {
    subscription_type = "EMAIL"
    address           = var.alert_emails[0]
  }
}

⚠️ WARNING: Budget Actions that apply IAM policies block users from creating new resources until the next budget period. This is intentional for dev/staging. NEVER use AUTOMATIC approval on production — use MANUAL instead so a human reviews the action before it executes.

The count = var.environment == "prod" ? 0 : 1 is your safety net — this action literally cannot exist in a production account. 🛡️

Option B: Auto-stop specific EC2/RDS instances (surgical kill switch)

resource "aws_budgets_budget_action" "stop_instances" {
  count = var.environment == "prod" ? 0 : 1

  budget_name        = aws_budgets_budget.monthly_with_sns.name
  action_type        = "RUN_SSM_DOCUMENTS"
  approval_model     = "AUTOMATIC"
  notification_type  = "ACTUAL"
  execution_role_arn = aws_iam_role.budgets_action_ssm.arn

  action_threshold {
    action_threshold_type  = "PERCENTAGE"
    action_threshold_value = 100
  }

  definition {
    ssm_action_definition {
      action_sub_type = "STOP_EC2_INSTANCES"
      region          = var.region
      instance_ids    = var.killable_instance_ids
    }
  }

  subscriber {
    subscription_type = "EMAIL"
    address           = var.alert_emails[0]
  }
}

💡 Key difference from GCP/Azure: AWS Budget Actions are native — no Lambda or Cloud Function required. AWS can directly apply IAM policies, SCPs (in Organizations), or stop EC2/RDS instances. Policy-based actions auto-reset at the start of the next budget period. Instance stop actions do NOT auto-reset — instances stay stopped until manually restarted.

📊 The Multi-Account Budget Matrix

Real companies don’t have one account. They have dozens. Here’s how to budget all of them from a single Terraform module:

variable "account_budgets" {
  type = map(object({
    account_id     = string
    monthly_budget = string
    environment    = string
    alert_emails   = list(string)
  }))
  default = {
    "payment-api-prod" = {
      account_id     = "111111111111"
      monthly_budget = "5000"
      environment    = "prod"
      alert_emails   = ["finops@company.com", "payments-lead@company.com"]
    }
    "payment-api-dev" = {
      account_id     = "222222222222"
      monthly_budget = "500"
      environment    = "dev"
      alert_emails   = ["eng-lead@company.com"]
    }
    "ml-pipeline-staging" = {
      account_id     = "333333333333"
      monthly_budget = "2000"
      environment    = "staging"
      alert_emails   = ["ml-team@company.com"]
    }
  }
}

resource "aws_budgets_budget" "per_account" {
  for_each = var.account_budgets

  name         = "${each.key}-budget"
  budget_type  = "COST"
  limit_amount = each.value.monthly_budget
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  cost_filter {
    name   = "LinkedAccount"
    values = [each.value.account_id]
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 50
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = each.value.alert_emails
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = each.value.alert_emails
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = each.value.alert_emails
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 90
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = each.value.alert_emails
  }
}

Add a new account? Add one entry to the map. terraform apply. Done. Every account gets identical protection. ✅

🧱 Per-Service Budgets (Catch the Expensive Outliers)

Some AWS services are particularly dangerous. EC2, SageMaker, and RDS can rack up thousands overnight. Create targeted budgets for your riskiest services:

variable "service_budgets" {
  type = map(object({
    service_name   = string
    monthly_budget = string
  }))
  default = {
    ec2 = {
      service_name   = "Amazon Elastic Compute Cloud - Compute"
      monthly_budget = "3000"
    }
    rds = {
      service_name   = "Amazon Relational Database Service"
      monthly_budget = "2000"
    }
    sagemaker = {
      service_name   = "Amazon SageMaker"
      monthly_budget = "1000"
    }
  }
}

resource "aws_budgets_budget" "per_service" {
  for_each = var.service_budgets

  name         = "service-${each.key}-monthly"
  budget_type  = "COST"
  limit_amount = each.value.monthly_budget
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  cost_filter {
    name   = "Service"
    values = [each.value.service_name]
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = var.alert_emails
    subscriber_sns_topic_arns  = [aws_sns_topic.budget_alerts.arn]
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = var.alert_emails
    subscriber_sns_topic_arns  = [aws_sns_topic.budget_alerts.arn]
  }

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 90
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = var.alert_emails
    subscriber_sns_topic_arns  = [aws_sns_topic.budget_alerts.arn]
  }
}

⚡ Quick Audit: Check Your Current Budget Coverage

# List ALL existing budgets on your account
aws budgets describe-budgets \
  --account-id $(aws sts get-caller-identity --query Account --output text) \
  --output table

# Quick check — do you have ANY budgets?
aws budgets describe-budgets \
  --account-id $(aws sts get-caller-identity --query Account --output text) \
  --query "Budgets | length(@)" \
  --output text

# Check anomaly detection monitors
aws ce get-anomaly-monitors --output table

If describe-budgets returns zero results — that account has zero cost guardrails. Fix it today. 🚨

💡 Architect Pro Tips

  • Always use FORECASTED alerts — ACTUAL alerts tell you money is already spent. FORECASTED alerts tell you money will be spent. The forecasted alert at 90% is the single most valuable budget notification you can deploy. Note: AWS needs ~5 weeks of data to generate forecasts.

  • Don’t over-alert — Alerts at 10%, 20%, 30%… create noise and get ignored. Stick to 3-4 strategic thresholds (50% info, 80% warning, 100% critical, 90% forecasted). Alert fatigue kills FinOps.

  • Billing data is delayed — AWS billing data updates up to 3 times per day, and there can be 24+ hours of lag. Set your budgets slightly BELOW your true pain threshold to account for this. If $1,000 is your real limit, set the budget to $800.
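One hedged way to size that safety margin is from your worst-case burn rate: with up to a day of reporting lag, spend can run a full day past whatever the budget last saw. The helper below is purely illustrative arithmetic, not an AWS API:

```python
def safe_budget_limit(true_limit: float, worst_case_daily_burn: float,
                      lag_hours: float = 24.0) -> float:
    """Budget limit that still warns before lagged billing data lets spend
    reach your true pain threshold (never below zero)."""
    blind_spot = worst_case_daily_burn * lag_hours / 24.0
    return max(true_limit - blind_spot, 0.0)
```

With a $1,000 real limit and a worst-case burn of $200/day, this gives the $800 figure from the tip above; a flat 20% haircut is a simpler rule of thumb that lands in the same place.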

  • First 2 budgets are free — AWS gives you 2 free budget alerts per account. Each additional budget is ~$0.62/month. For most teams, you need 3-5 budgets (account-level + per-service). That’s roughly $2/month for complete cost visibility.

  • Budget Actions reset differently — IAM/SCP policy actions auto-reset at the start of the next budget period. Instance stop actions do NOT — stopped instances stay stopped. Plan your automation accordingly.

  • Combine Budget Actions with Organizations SCPs — For multi-account setups, Budget Actions can apply Service Control Policies from the management account to member accounts. This is the most powerful kill switch available — it can block ALL resource creation org-wide.

  • Use Cost Allocation Tags — Per-team budgets filtered by CostCenter or Team tags only work if resources are tagged. Enforce tagging with AWS Organizations Tag Policies first, then build tag-filtered budgets on top.

📊 Quick Reference: What to Deploy First

Layer                       Effort   Cost                    Impact
Budget alerts (email)       5 min    Free (first 2)          Baseline visibility
Forecasted spend alert      2 min    Free                    Early warning before overspend
SNS → Slack                 20 min   ~$0/month (free tier)   Team-wide awareness
Cost Anomaly Detection      5 min    Free                    ML-powered spike detection
Budget Actions (non-prod)   15 min   Free                    Automatic cost cap
Multi-account budget map    10 min   ~$0.62/budget/month     Org-wide protection
Per-service budgets         10 min   ~$0.62/budget/month     Targeted monitoring

Start with budget alerts and anomaly detection. They’re both free, they take 10 minutes total, and they’re the best first step you can take to avoid a surprise bill. 🎯

📊 TL;DR

Budget alerts           = FREE (first 2), takes 5 min, no excuse to skip
FORECASTED alerts       = warns you BEFORE you hit the limit (needs ~5 weeks data)
SNS → Slack             = real-time team visibility, pennies/month
Cost Anomaly Detection  = FREE, ML-powered, catches spikes budgets miss
Budget Actions          = native kill switch — deny policies or stop instances
for_each budgets        = one Terraform map protects every account
Billing delay           = up to 24hrs lag, so set budgets BELOW your true limit
No hard cap exists      = AWS will never stop charging you automatically

Bottom line: AWS will happily charge you $72K while you sleep. Budget alerts are free, take 5 minutes, and are the only thing standing between you and a career-ending invoice. Deploy them now. 🔥

Your dev account doesn’t have a budget alert yet, does it? Run aws budgets describe-budgets right now — if it returns zero, go deploy that aws_budgets_budget resource. It’s free and takes 5 minutes. Your future self (and your CFO) will thank you. 😀
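
A minimal version of that resource might look like the sketch below (Terraform AWS provider; the budget name, limit, and subscriber email are placeholders, so adjust them to your account):

```hcl
resource "aws_budgets_budget" "monthly" {
  name         = "account-monthly" # placeholder name
  budget_type  = "COST"
  limit_amount = "800"             # set below your true pain threshold
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  # Forecasted alert: fires BEFORE the money is spent
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 90
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = ["team@example.com"] # placeholder
  }

  # Actual alert: fires once spend crosses the limit
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["team@example.com"] # placeholder
  }
}
```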

Found this helpful? Follow for more AWS cost optimization with Terraform! 💬

The $30 AWS Bill I Didn’t Expect (And What It Taught Me)

One mistake many beginners make with AWS is assuming nothing will happen if they stop using it.

In 2023, I opened an AWS account and deployed a small React app just to explore the console. After that, I left the account untouched for months.

When I returned in 2024 to take cloud learning seriously, my free tier had already expired. I logged in and saw a $30 bill.

The amount wasn’t huge, but the lesson learned was important.

I didn’t intentionally leave anything running. I simply didn’t understand which AWS services continue to cost money when you don’t turn them off.

After I reached out to AWS Support and explained my situation, the charge was resolved. That experience changed how I approach cloud learning.

Cloud knowledge is not only about deploying services. It’s also about knowing what to stop, delete, and shut down.

This lesson came from my own early mistakes while learning AWS, and it’s one that many beginners don’t realize until they see a bill.

That’s why I started creating content to help beginners understand AWS billing, Free Tier limits, and simple cleanup habits that prevent surprise charges.

If you prefer videos, I’ve created guides on:

  • AWS Free Tier Explained: Services That Are Not Free

  • AWS Services You Should Delete After Every Lab

If you prefer reading, this article breaks it down clearly: The Hidden Challenges of Building with AWS

One habit that saves money: After every lab or test, spend five minutes checking EC2, RDS, Lambda, and S3 before logging out. That small habit can prevent surprise charges.

Key takeaway: In AWS, learning what to turn off is just as important as learning what to deploy. Always review and clean up your resources after every experiment.

Discussion question: Which AWS service do you always double-check before logging out, or which one worries you the most about unexpected costs?

Drop your answer in the comments.

If this helped clarify things, follow the newsletter and reshare so others can learn too.

Stay updated with my projects by following me on Twitter, LinkedIn, and GitHub.

Thank you.

Why I’m Stepping Away from DEV (For Now)

This is one of those posts that feels strange to write, mostly because DEV has been such a positive constant for me.

Writing here played a huge role in my learning journey. Putting my thoughts into words helped me solidify what I was learning, track my progress, and build confidence in areas that once felt fuzzy. DEV gave me a place to think out loud, consistently, and that alone made a big difference in how I grew as a developer.

That experience has been genuinely valuable to me.

Just as important as the writing has been the people. I’ve made connections here that I appreciate a lot, and I’m grateful for the continued support and encouragement I’ve received over time. Being part of this community – and later becoming a Trusted Member – meant something to me, and still does.

So this isn’t a post coming from frustration or disappointment.

When Something Good Becomes Too Much

I’m neurodivergent, and one thing I’ve learned about myself is that when something works well for my brain, it can very easily take up more space than I intend.

DEV slowly shifted from being a place I wrote on to something I was thinking about constantly. What I might write next, how involved I should be, whether I was doing enough. That sense of involvement wasn’t imposed on me – it came from caring and wanting to show up properly.

But over time, that level of mental engagement became distracting in ways I can’t really ignore anymore.

Nothing bad happened. Nothing changed dramatically. My relationship with the platform did.

Choosing to Step Back

Stepping away isn’t about closing a chapter with negativity attached to it. It’s about recognising that my attention is limited, and that right now I need to protect it more intentionally.

DEV helped make my learning journey a successful one. The writing, the consistency, and the sense of community all contributed to that. I’m thankful for it, and I don’t see that changing just because I’m choosing to step back.

Thank You

I want to say thank you to everyone who’s supported me here, read my posts, or connected with me along the way. I appreciate it more than I can easily put into words.

This isn’t a dramatic goodbye.
Just a thoughtful pause.

The Evolution of Async Rust: From Tokio to High-Level Applications

Disclaimer: This article was created using AI-based writing and communication companions. With its help, the core topics of this rich and nuanced livestream were conveniently distilled into a compact blog post format.

In our latest JetBrains livestream, Vitaly Bragilevsky was joined by Carl Lerche, the creator of Tokio, for an in-depth conversation about the evolution of async Rust. Tokio has become the de facto asynchronous runtime for high-performance networking in Rust, powering everything from backend services to databases. During the discussion, they explored how async Rust has matured over the years, the architectural decisions behind Tokio, common challenges developers face today, and where the ecosystem is heading next. If you missed the live session, you can watch the full recording on JetBrains TV. Below, you’ll find a structured recap of the key questions and insights from the conversation.

Q1. What is TokioConf and why did you decide to organize it?

TokioConf is the first conference dedicated to the Tokio ecosystem, taking place in Portland, Oregon. This year marks ten years since Tokio was first announced, making it a natural moment to bring the community together. Use the code jetbrains10 for 10% off the general admission ticket (excluding any add-ons).

Buy TokioConf ticket

Tokio and Rust have become foundational technologies for infrastructure-level networking software, including databases and proxies. The conference is meant to reflect that maturity and growth. While the name highlights Tokio, the scope includes broader async and networking topics in Rust.

“Tokio and Rust have become one of the default ways companies build infrastructure-level networking software these days.”

Q2. When people hear “Async Rust,” what should they picture?

Async Rust is about more than performance. While handling high concurrency is a key advantage, async programming also improves how developers structure event-driven systems.

Timeouts, cancellation, and managing multiple in-flight tasks become significantly easier in async Rust compared to traditional threaded approaches. Async in Rust leverages the ownership model and Drop, enabling safe and clean cancellation patterns.

“Async is both performance, but also a way of managing lots of in-flight threads of logic well.”

Q3. How did Tokio begin? Why did Rust need it?

Tokio evolved from earlier experimentation with non-blocking I/O in Rust. Initially, Rust only had blocking socket APIs, and building efficient network systems required low-level abstractions. The journey went from Mio (epoll bindings), to the Future trait, and to async/await. Async/await was a major milestone in making async programming ergonomic in Rust.

“The way async/await ended up being designed is actually quite impressive.”

The language team managed to deliver memory safety and zero-cost abstractions in a way that wasn’t obvious at the time.
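
To make the Future trait concrete: an async fn compiles to a state machine that an executor polls to completion. The toy block_on below is a std-only busy-polling sketch (nothing like Tokio's real scheduler) that drives one such future by hand:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: fine here because we poll in a loop anyway.
fn noop_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// Minimal executor: poll the future until it yields Ready.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` is a local variable we never move after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            // Real executors park the thread; this sketch just spins.
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

// An async fn: the compiler turns this into an anonymous Future.
async fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    assert_eq!(block_on(add(2, 3)), 5);
    println!("2 + 3 = 5");
}
```

The key insight async/await delivered is that this whole state machine is built at compile time, with no heap-allocated stacks, which is what preserved the zero-cost property.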

Q4. Could Rust have something like Java’s virtual threads?

Rust originally had green threads and coroutines before version 1.0, but they were removed to preserve zero-cost abstractions and C-level performance characteristics. The overhead and complexity of stack management for green threads conflicted with Rust’s design goals at the time.

“Rust actually started with lightweight virtual threads and coroutines.”

Whether such a feature could return is an open question, but today’s Rust async model is fundamentally different.

Q5. How does cancellation work in Async Rust?

Cancellation in Rust is implemented through Drop. When you drop a future, its cleanup logic runs automatically.

If the future directly owns a socket, it closes immediately. If the socket is owned by another task (for example in Hyper), cancellation signals cascade through channels and trigger cleanup.

However, async functions can be dropped at any point, and developers must write defensively to handle that reality correctly.
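
A std-only sketch of the cancellation-as-Drop idea (Connection here is a hypothetical stand-in for a socket): dropping a future drops everything it owns, so cleanup runs even though the future never completed.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Observable flag so we can see the cleanup happen.
static CLOSED: AtomicBool = AtomicBool::new(false);

// Hypothetical stand-in for a socket or other owned resource.
struct Connection;

impl Drop for Connection {
    fn drop(&mut self) {
        // Runs whenever the owning future is dropped,
        // whether it completed or was cancelled mid-flight.
        CLOSED.store(true, Ordering::SeqCst);
        println!("connection closed");
    }
}

async fn handle(_conn: Connection) {
    // A handler that never finishes on its own.
    std::future::pending::<()>().await;
}

fn main() {
    let fut = handle(Connection);
    assert!(!CLOSED.load(Ordering::SeqCst)); // nothing dropped yet
    drop(fut); // "cancel" the future: its owned Connection is dropped too
    assert!(CLOSED.load(Ordering::SeqCst)); // cleanup ran
    println!("cancelled, cleanup ran");
}
```

This is exactly why defensive coding matters: any `.await` point is a place where your function's locals may be dropped instead of the next line running.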

Q6. Why did Tokio become the dominant async runtime?

Tokio became the de facto standard largely due to ecosystem momentum. Early crates like Hyper built on Tokio, and once that foundation solidified, switching runtimes required compelling reasons.

Other runtimes exist (especially for embedded or specialized contexts) but for general server-side development, Tokio’s ecosystem depth made it the default.

“There just wasn’t a good reason to not use Tokio.”

Q7. What about io_uring? Is it the future?

io_uring can provide benefits, especially for batching filesystem operations. However, for networking workloads, real-world gains are often limited. It is more complex than epoll and has historically had more security issues. That said, Tokio allows mixing in io_uring-specific crates when you have a clear use case.

“I’ve not seen real performance benefits with swapping out io_uring for sockets under the hood in Tokio.”

Q8. What were the most important design decisions in Tokio?

Tokio intentionally avoided reinventing scheduling patterns. Instead, it adopted proven strategies from Go and Erlang, including work-stealing schedulers.

The philosophy was to provide:

  • Good defaults,
  • Strong performance,
  • Escape hatches for advanced tuning.

The goal was to make Tokio easy enough for most developers while still enabling performance optimization when needed.

Q9. What are common mistakes in Async Rust?

The biggest issue comes from cooperative scheduling. Tasks only yield at .await, so long CPU-heavy work without awaiting can stall the runtime. Tokio provides runtime metrics to help detect such problems. Understanding how the scheduler works is crucial to avoiding tail-latency problems.

“Because async is cooperative scheduling, you have to make sure you’re yielding back to the runtime regularly enough.”

Q10. What’s the best way to debug Async Rust?

Debugging async systems often involves:

  • Tracing,
  • Runtime metrics,
  • Async backtraces,
  • Traditional debuggers.

Stuck tasks and high tail latency remain the hardest issues to diagnose. Better static analysis and linting tools could significantly improve this area in the future.

“The biggest pitfall stems down to developers accidentally canceling something and not handling the cancellation appropriately.”

Q11. What is Toasty, and why are you building it?

Rust has matured as a systems and infrastructure language, but higher-level web application tooling remains underdeveloped. Toasty aims to explore that space by building a productive, ergonomic data modeling and query layer. The goal is not just performance, but developer ergonomics – while still preserving escape hatches for advanced use cases.

Q12. Can Rust move into high-level web frameworks?

Rust already has a foothold in many organizations thanks to its infrastructure strengths. As internal Rust ecosystems grow, the demand for higher-level tooling increases. The missing piece is ergonomic, opinionated frameworks that prioritize productivity. The long-term vision is not to replace existing ecosystems, but to expand Rust’s reach upward into full-stack development.

“I do think there’s a way to build productive and ergonomic libraries with Rust that focus on ease of use.”

Closing Thoughts

Rust has firmly established itself as the best choice for many infrastructure-level systems. The next frontier is higher-level application development. Tokio solved async infrastructure and now the ecosystem is evolving toward productivity and full-stack capability.

If you’re interested:

  • Explore Toasty on the Tokio GitHub
  • Join the Tokio Discord
  • Attend TokioConf in Portland, Oregon

Watch our previous livestream with Herbert Wolverson and explore everything you wanted to ask about Rust.

Editor Improvements: Smooth Caret Animation and New Selection Behavior

We’re continuing to modernize our IDEs, and in this update we’ve refreshed something you interact with constantly – the editor. These changes are designed to provide improved comfort, a cleaner look, and a more enjoyable experience during the hours you spend coding. We want to make the editor easier on the eyes, help you stay productive, and add a bit of fun variety.

What’s new

  • New selection behavior: Selection now only highlights the actual text, not the blank space at the end of a line.
  • New smooth caret movement: A new animation makes caret jumps easier to follow. This is a long-awaited request from our users.
  • Smooth blinking and rounded caret: The editor now has a more modern look and feel, matching the recently introduced Islands UI theme.

New selection behavior

When working with code, you often have to select multiple lines at once, and it is important to be able to see what is included in the selection. The editor now highlights only the characters you actually selected, rather than the entirety of each line. This reduces the amount of blank blue space, making the selection itself clearer. It also removes ambiguity, letting you see exactly which characters are selected.

New smooth caret movement

This one has been on our wishlist for a long time. Like us, you work in the text editor every day, constantly navigating and modifying code. One way we’re making this experience more enjoyable is by introducing smooth, animated caret movement. You might not realize you need this until you try it. It enhances comfort and helps you stay oriented in your work, making the editor feel both more functional and visually pleasing.

At the same time, we recognize that excessive caret animation can make typing feel delayed, slow, and unresponsive. We believe that typing must feel instantaneous, and we wanted to make sure we got it right. Introducing animations without breaking habits is a significant challenge.

That’s why we built our own mode for caret movement – Snappy – which you won’t find in other editors. In this mode, the caret lands almost immediately where it should be and then settles with a smooth stop. The result feels quick yet smooth.

If you prefer clearly visible motion, there’s also Gliding mode, where the caret moves smoothly, making jumps easy to follow with your eyes. This option is similar to what you see in other popular text editors.

You can switch between movement modes using the Use smooth caret movement dropdown in Settings | Editor | General | Appearance. You can revert to the old behavior simply by disabling it.

Smooth blinking and rounded caret

We’ve also given the caret a new look. Its smooth blinking feels calmer and more modern. As a final touch, we rounded it to match rounded elements throughout the IDE in the Islands UI.

How to try it

These new features are all available in the latest early access builds of JetBrains IDEs, meaning there’s still time for us to adjust them based on your feedback. Give them a try and let us know what you think!

Try them in the IntelliJ IDEA EAP today

Kodee’s Kotlin Roundup: KotlinConf ’26 Updates, New Releases, and More

KotlinConf 2026 is starting to take shape, and there’s a lot happening across the Kotlin ecosystem right now. From the first conference speakers and community awards to new releases, tools, and real-world Kotlin stories at serious scale, I’ve gathered all the highlights you won’t want to miss. Let’s dive in!

From certified Kotlin trainers

  • How Backend Development Teams Use Kotlin in 2025
  • How Mobile Development Teams Use Kotlin in 2025

Where you can learn more

  • Industry Leaders on the KotlinConf’25 Stage: What Global Brands Built With Kotlin
  • Advent of Code 2025 in Kotlin: Puzzles, Prizes, and Community
  • Pick Your KotlinConf Workshop by What You Want to Learn
  • Exposed 1.0 Is Now Available
  • Kubernetes Made Simple: A Guide for JVM Developers
  • A Better Way to Explore kotlinx-benchmark Results with Kotlin Notebooks

YouTube highlights

  • API Design at Google: Building Android Libraries
  • Talking Kotlin #144 | Kotlin 2.3 Release Special (Audio Only)
  • Making Apps Accessible With Kotlin and Compose
  • Sell or Buy? Custom Financial Data Visualisation With Kotlin
  • Language Design in the Age of AI
  • Why iOS Devs Struggle With KMP (and How to Fix It)
  • A Better Way to Explore and Solve Programming Puzzles: Kotlin Notebooks
  • Why ING Chooses Kotlin for Server-Side

Databao Becomes a Partner of the Open Semantic Interchange Initiative Led by Snowflake and Other Industry Leaders

Modern data teams need flexibility and scalability as workflows evolve and AI becomes central to analytics. They are increasingly relying on AI to enable self-service analytics and accelerate data workflows, and it has become essential to establish shared business logic and context that both humans and AI systems can reliably understand and use. That shared context is the semantic layer.

That’s why we are thrilled to announce that Databao, a recent JetBrains product bringing semantic context and local data agents to data teams, is joining the Open Semantic Interchange (OSI), the open-source initiative led by Snowflake and other industry leaders to advance interoperable and governed semantic models.

Why open semantics matter

The Open Semantic Interchange (OSI) is an open-source initiative led by Snowflake and a broad ecosystem of partners across multiple domains and industries including BI, data engineering, governance, and AI. Its goal is to define a shared, vendor-neutral standard for semantic metadata, enabling it to move seamlessly across tools and platforms.

By making semantic context portable and interoperable, OSI reduces complexity, accelerates the adoption of AI and analytics tools, and helps organizations align on consistent data definitions, laying the foundation for more reliable insights and scalable AI innovation.

What Databao brings to data teams

OSI reflects the principles that have guided Databao from the start.

First, trust at scale depends on semantics. Databao treats the semantic layer as living context, shared, governed, and continuously refined, so business logic remains clear and reliable as data usage grows.

Second, adoption requires flexibility. Teams shouldn’t have to choose between strong governance and usability. Databao already works with existing tools and logic. Teams stay in control of where their source of truth lives and how it’s maintained.

By joining OSI, Databao reinforces its commitment to an open, community-driven approach to sharing semantic models, so business definitions remain consistent and usable across every team’s workflows.

About Databao

Databao is a new data product from JetBrains that helps data teams create and maintain a shared semantic context and build their own data agents on top of it. Our goal is to provide an AI-native analytics experience that business users can trust, enabling them to query and analyze data in plain language.

Databao’s modular components, the context engine and the data agent, can run independently, either locally or within your existing infrastructure, using your own API keys.

We are also inviting data teams to build a proof of concept with us: we’ll explore your use case, define a context-building process, and grant agent access to a selected group of business users. Together, we will then evaluate the quality of responses and the overall value.

TALK TO THE TEAM

Learn more about the Open Semantic Interchange on Snowflake’s blog, or explore both modules at databao.app.