Hashtag Jakarta EE #323

Welcome to issue number three hundred and twenty-three of Hashtag Jakarta EE!

Right now, I am on my way home from Devnexus 2026. It was as busy as always, so I haven’t been able to finish up my post from the event, but I promise that it will be out shortly. I will only have 24 hours at home this time before I am headed to JavaLand 2026. They have changed venue again, and the conference will be held at Europa-Park in southern Germany. Hopefully this change of venue will be more successful than the last couple of editions at Nürburgring.

Another thing that happened last week was that I became an IBM Champion. It makes sense since I often use IBM technology such as Open Liberty in my demos at conferences around the world. Since I am new to the program, I don’t really know what it will mean, but I am excited to find out.

The work on Jakarta EE 12 continues. In the weekly platform call, progress is discussed, and the projects for the individual component specifications report on what they are focusing on at the moment. Due to inactivity in some of the projects, and recent layoffs among our member companies, the team is discussing how to bring on new committers and get them up to speed as fast as possible.

If you are still waiting for the follow-up post from my Will AI Kill Open Source post a couple of weeks ago, don’t despair. I have so much material for the next post and just need a little breathing room to organise my thoughts. In the context of this theme, I have done some pretty cool experiments that I am very eager to share once they are in a sharable state. Stay tuned…

Ivar Grimstad


191 Demos. 0 Signups. Then We Removed the Telegram Requirement.

Last week we wrote about why 191 developers tried our AI messenger and nobody signed up. Our hypothesis: the value proposition wasn’t clear — developers saw a chat interface but didn’t understand why it was different from any other chatbot.

We were half right.

What We Missed

While analyzing the funnel, we found a second friction point we’d overlooked: authentication was Telegram-only.

To sign up for Agenium Messenger, you had to:

  1. Have a Telegram account
  2. Authorize our bot
  3. Complete a multi-step mobile polling flow

For developers building on Google A2A, MCP, or custom agent stacks — many of whom don’t use Telegram daily — this was a wall. Not an insurmountable one, but enough friction to produce a predictable result: try the demo as a guest, see something mildly interesting, leave.

We shipped email magic link authentication this week. No Telegram required. One email address, one click, done.
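Under the hood, a magic link is just a signed, expiring token mailed to the user. A minimal sketch with Python's standard library (the secret, the endpoint URL, and the field names are illustrative, not Agenium's actual implementation):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # hypothetical; load from config in practice

def issue_magic_link(email, ttl=900):
    """Create a signed, expiring token and embed it in a login URL."""
    payload = json.dumps({"email": email, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    token = base64.urlsafe_b64encode(payload).decode() + "." + sig
    return f"https://chat.agenium.net/auth?token={token}"  # endpoint is illustrative

def verify_token(token):
    """Return the email if the signature is valid and the token unexpired, else None."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None
    return claims["email"]
```

Clicking the link hands the token back to the server, which verifies it and creates the session; no password ever exists.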

Why Email Matters More for Agents Than for Humans

Here’s the insight that surprised us.

For human apps, email vs. Telegram vs. GitHub OAuth is mostly UX preference. You’re logging a human in.

For agent infrastructure, email is different. It’s not just a login method — it’s an identity anchor.

An AI agent needs an address that:

  • Is portable across model upgrades (the agent is still “the same agent” when you switch from GPT-4 to Claude)
  • Is resolvable by other agents that have never met it before
  • Carries trust signals — DMARC records, sending reputation, DKIM signing — that don’t exist with arbitrary auth tokens

Email has decades of trust infrastructure baked in. When agent@yourdomain.com sends a message, there’s a verifiable chain of custody (DNS records, signing keys, bounce history) that a randomly-generated OAuth token simply doesn’t have.
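Those trust signals are ordinary DNS TXT records. As a rough illustration (the record below is a made-up example, and real DMARC parsing has more edge cases), the policy string published at _dmarc.yourdomain.com can be read like this:

```python
def parse_dmarc(record):
    """Parse a DMARC TXT record ("v=DMARC1; p=reject; ...") into a tag dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

# Example record as it might appear in DNS (made up for illustration)
record = "v=DMARC1; p=reject; rua=mailto:dmarc@yourdomain.com"
policy = parse_dmarc(record)
```

Any agent that can do a DNS lookup can check these tags for a sender's domain; no shared auth provider is required.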

We didn’t ship email auth just to reduce friction. We shipped it because email is a better identity primitive for the agent web than anything else that currently exists at scale.

What the Numbers Show

Before email auth (first 10 days):

  • 191 demo sessions started
  • 0 signups
  • Auth method available: Telegram only

After email auth (first 48 hours):

  • Email auth live at chat.agenium.net
  • Guest mode: one click, no auth required
  • Email option: magic link, no password

It’s too early to report a conversion number. But we can report that two things are now true that weren’t before:

  1. A developer who has never opened Telegram can sign up in under 30 seconds
  2. The agent address they get (username@agenium.net) is email-anchored and resolvable

The Discovery Layer Connection

This matters for the broader A2A/MCP ecosystem because agent discovery has a chicken-and-egg problem with identity.

When Agent A wants to find Agent B in a registry, it needs to look up something stable — an address that will still be valid six months from now, after deployments, model upgrades, and server migrations.

agent://username.agenium.net solves this. The addressing layer is separate from the compute layer. You can swap the model underneath without changing the address that other agents use to find you.

Email is the most natural way to bootstrap that address:

  • Humans already have email identity
  • Email domains are owned and verified
  • Email-based authentication creates an implicit link between a human (or organization) and their agent’s public address
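One way to picture the addressing layer: parsing an agent:// address is enough to recover a stable host and username, independent of whatever model runs behind it. The Agent Card location below is an assumption for illustration, not a published Agenium endpoint:

```python
from urllib.parse import urlparse

def resolve_agent_address(address):
    """Split an agent:// address into its host and username parts.

    The mapping from host to agent metadata (e.g. an A2A Agent Card URL)
    is hypothetical here, not a documented Agenium API.
    """
    parsed = urlparse(address)
    if parsed.scheme != "agent":
        raise ValueError(f"not an agent address: {address}")
    host = parsed.netloc
    return {
        "host": host,
        "username": host.split(".")[0],
        # Hypothetical well-known location for the Agent Card
        "agent_card": f"https://{host}/.well-known/agent.json",
    }
```

Because the address is all another agent needs to store, the compute behind it can change freely.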

What We’re Shipping Next

The current state of Agenium Messenger:

  • ✅ Guest mode (no account, instant demo)
  • ✅ Email magic link auth
  • ✅ Telegram Login (for TG-native users)
  • ⏳ GitHub OAuth (credentials being configured)
  • ✅ Stable agent:// addresses
  • ✅ A2A-compatible Agent Cards

We’re building toward one goal: every AI agent deserves a stable, resolvable address that persists across the agent’s lifetime — not its deployment’s lifetime.

Try it: chat.agenium.net

No Telegram required.

Building Agenium in public — the DNS layer for the agent web. Follow along at @AgeniumPlatform.

“How to Fix Claude Code’s Broken Permissions (With Hooks)”

Claude Code’s permission system has a problem. If you’ve ever set up careful allow/deny rules in settings.json and still gotten prompted for commands that should match, you’re not alone.

Issue #30519 documents the core problems:

  • Wildcards don’t match compound commands. Bash(git:*) doesn’t match git add file && git commit -m "message". Claude generates compound commands constantly.
  • “Always Allow” saves dead rules. Click “Always Allow” on git commit -m "fix typo" and it saves that exact string. Never matches again.
  • User-level settings don’t apply at project level. Rules in ~/.claude/settings.json show up in /permissions but don’t match.
  • Deny rules have the same bugs. Multiline commands bypass deny rules too.

There are 30+ open issues about permission matching. The community is building workarounds. Here’s the one that works: move enforcement from permissions to hooks.

The Core Insight

Permissions are a request to the system. Hooks are enforcement.

A PreToolUse hook runs before every tool call. It sees the full command string, including compound commands, pipes, and subshells. It can block anything, suggest alternatives, and it works regardless of permission matching bugs.

What This Looks Like in Practice

Block Destructive Git Operations

Create ~/.claude/hooks/git-safe.sh:

#!/bin/bash
# Reads tool_name and tool_input from Claude Code hook protocol
INPUT=$(cat)
TOOL=$(echo "$INPUT" | jq -r '.tool_name // empty')
[ "$TOOL" != "Bash" ] && exit 0

CMD=$(echo "$INPUT" | jq -r '.tool_input.command // empty')

# Check for destructive git commands - works in compound commands too
if echo "$CMD" | grep -qE 'git\s+push\s+.*--force|git\s+reset\s+--hard|git\s+checkout\s+\.|git\s+clean\s+-f'; then
  echo '{"decision":"block","reason":"Blocked by git-safe hook. Use safer alternatives: git push --force-with-lease, git stash, git checkout <specific-file>."}'
  exit 0
fi

The key difference from permissions: grep -E matches anywhere in the command string. cd repo && git push --force origin main gets caught. Permission wildcards miss this.
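You can verify that difference outside the shell. Python's re module performs the same unanchored search that the hook's grep -E does:

```python
import re

# Equivalent of the grep -E pattern used in git-safe.sh
destructive = re.compile(r"git\s+push\s+.*--force|git\s+reset\s+--hard")

# A compound command still matches, because the search is unanchored
compound = "cd repo && git push --force origin main"
print(bool(destructive.search(compound)))  # True

# A prefix-style check (roughly what a Bash(git:*) wildcard amounts to) fails
print(compound.startswith("git"))  # False
```

Unanchored matching is the whole trick: the dangerous substring is caught wherever it appears in the command line.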

Block Dangerous Bash Commands

Same pattern for system-level threats:

if echo "$CMD" | grep -qE 'rm\s+-rf\s+/|sudo\s|chmod\s+-R\s+777|curl.*\|\s*bash'; then
  echo '{"decision":"block","reason":"Blocked by bash-guard. This command could damage your system."}'
  exit 0
fi

Protect Specific Files

For .env, credentials, production configs:

TOOL=$(echo "$INPUT" | jq -r '.tool_name // empty')
FILE=$(echo "$INPUT" | jq -r '.tool_input.file_path // .tool_input.command // empty')

# Check against patterns in .file-guard
if [ -f ".file-guard" ]; then
  while IFS= read -r pattern; do
    [[ "$pattern" =~ ^[[:space:]]*$ || "$pattern" =~ ^# ]] && continue
    if [[ "$FILE" == *"$pattern"* ]]; then
      echo "{\"decision\":\"block\",\"reason\":\"Protected by file-guard: $pattern\"}"
      exit 0
    fi
  done < .file-guard
fi

Register the Hooks

Add to ~/.claude/settings.json:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {"type": "command", "command": "bash ~/.claude/hooks/git-safe.sh"},
          {"type": "command", "command": "bash ~/.claude/hooks/bash-guard.sh"}
        ]
      },
      {
        "matcher": "*",
        "hooks": [
          {"type": "command", "command": "bash ~/.claude/hooks/file-guard.sh"}
        ]
      }
    ]
  }
}

Pre-Built Hooks

I maintain tested versions of all these hooks with per-project allowlists, edge case handling, and safer-alternative suggestions:

  • git-safe: 45 tests
  • bash-guard: 40 tests
  • file-guard: 27 tests
  • branch-guard: 35 tests

Install all at once:

curl -fsSL https://raw.githubusercontent.com/Bande-a-Bonnot/Boucle-framework/main/tools/install.sh | bash -s -- all

Or check your current setup first:

curl -fsSL https://raw.githubusercontent.com/Bande-a-Bonnot/Boucle-framework/main/tools/safety-check/check.sh | bash

Why Hooks Beat Permissions for Safety

                         Permissions                   Hooks
  Compound commands      Broken (#25441)               Full regex matching
  Per-project config     Inconsistent (#5140)          Config files in project root
  Deny enforcement       Bypassable (#18613)           Runs before tool execution
  “Always Allow” drift   Saves exact strings (#6850)   Pattern-based, no drift
  Custom logic           Not supported                 Any bash/python script

Permissions are great for convenience (“don’t ask me about git add”). Hooks are for safety (“never force push, no matter what”).

Use both: permissions for workflow, hooks for enforcement.

🚀 DevOrch: The Multi-Provider AI Coding CLI

Initial view

As a developer, I love working in the terminal. Fast, keyboard-driven, and distraction-free. But when it comes to AI coding assistants, most tools force you to switch UIs — web apps, IDE plugins, or proprietary apps.

So I built DevOrch, a Python CLI tool that unifies multiple AI providers in one place. Think of it as your personal terminal AI workstation.

Image: DevOrch prompting for permissions

🔹 What is DevOrch?

DevOrch is a command-line AI assistant that supports 15+ AI providers, including:

  • OpenAI (GPT‑4, GPT‑4o)
  • Anthropic (Claude)
  • Google Gemini
  • Mistral, Groq, Together AI, Copilot, Ollama, LM Studio, and more

All from a single CLI. No switching between websites or IDE plugins.

🔹 Key Features

  • Multi-provider support: Switch between AI providers seamlessly.
  • Interactive modes:
    • ask – get instant answers or code suggestions
    • plan – generate project plans or steps
    • auto – let DevOrch take actions across files or commands
  • Session persistence: Keep conversations alive between CLI sessions.
  • Secure API key storage: No more pasting keys every time.
  • Terminal-first experience: Fast, distraction-free, works entirely from the command line.

🔹 Why DevOrch?

  • For terminal lovers: If you live in Vim, Tmux, or your shell, this tool fits naturally.
  • Extensible: Adding new providers is easy. DevOrch was built to grow.
  • Portable: Install it with one command:
pip install devorch
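DevOrch's internals aren't shown in this post, but one common way to make a multi-provider CLI extensible is a registry keyed by provider name, so a new backend is just one registered function. This is a hypothetical sketch of the pattern, not DevOrch's actual code:

```python
# Registry mapping provider name -> a callable that answers a prompt
PROVIDERS = {}

def register(name):
    """Decorator that adds a provider's ask() function to the registry."""
    def wrap(fn):
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("echo")  # stand-in provider for demonstration
def echo_provider(prompt):
    return f"echo: {prompt}"

def ask(provider, prompt):
    """Dispatch a prompt to the named provider."""
    if provider not in PROVIDERS:
        raise KeyError(f"unknown provider: {provider}")
    return PROVIDERS[provider](prompt)
```

With this shape, supporting a new API means writing one function and decorating it; the rest of the CLI stays untouched.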

🔹 Quick Demo

# Start DevOrch
devorch

# Ask a coding question
ask "How do I implement a binary search in Python?"

# Plan a project
plan "Create a CLI tool to automate file backups"

# Auto mode for actions
auto "Update requirements and push changes to GitHub"

DevOrch will respond directly in your terminal, with clear outputs and suggestions.

🔹 Installation

pip install devorch

💡 Optional: Use pipx for isolated CLI installation:

pipx install devorch

🔹 Community & Feedback

DevOrch is open-source: GitHub Repository
If you try it out, please star the repo ⭐, give feedback, or suggest new providers.

🔹 Stats & Early Adoption

Even in its first week, DevOrch has been downloaded 200+ times on PyPI — and the numbers are growing daily.

🔹 Next Steps

  • Add more AI providers
  • Improve async handling for faster responses
  • Integrate tooling like git, Docker, and linters
  • Add a plugin system so the community can extend DevOrch easily

✅ Try it Today

If you love CLI workflows + AI coding assistance, DevOrch is your tool. Install it, and turn your terminal into a full-fledged AI coding assistant.

pip install devorch
devorch

🚀 FreelanceOS — AI-Powered Operating System for Freelancers

What I Built

FreelanceOS is a complete AI-powered operating system for freelancers and solopreneurs, built entirely on Notion MCP + Google Gemini AI.

Freelancers waste 5–10 hours every week on admin work that doesn’t pay — writing contracts, creating invoices, sending client update emails, and chasing unpaid payments. FreelanceOS eliminates all of that.

You type a few words. FreelanceOS generates a professional AI-written contract, invoice, or client email — and saves it directly into your Notion workspace automatically.

The Problem It Solves

  Admin Task                                  Time Wasted Per Week
  Writing freelance contracts                 1–2 hours
  Creating & formatting invoices              30–60 mins
  Writing client update emails                20–30 mins
  Tracking unpaid invoices                    Hours per month
  Managing clients & projects across tools    Daily friction

FreelanceOS collapses all of this into one AI-powered Notion workspace.

✨ Core Features

📊 AI Dashboard
Pulls live data from all 5 Notion databases and feeds it to Gemini AI, which analyzes your portfolio and gives you personalized business insights — total revenue potential, overdue projects, workload balance, and 3 actionable recommendations.

AI Dashboard

📄 AI Contract Generator
Enter client name, project description, budget, and deadline. FreelanceOS generates a complete professional freelance contract with scope, payment terms, revision policy, ownership rights, and termination clause — saved instantly to your Notion Contracts database.

Contract Generator

🧾 AI Invoice Generator
Enter client name, amount, and work done. FreelanceOS generates a professional itemized invoice with payment instructions and due dates — saved to your Notion Invoices database as “Unpaid” and tracked automatically.

Invoice Generator

👥 Client & Project Management
Full CRUD operations on Clients and Projects — all stored and managed through Notion MCP.

Add User

Add Project

🚪 Clean Exit

Exit Screen

🗺️ System Architecture

User Input (CLI)
      │
      ▼
FreelanceOS (Python)
      │
      ├──▶ Google Gemini AI ──▶ AI-Generated Content
      │                               │
      └──▶ Notion MCP API ◀───────────┘
                │
                ▼
        Notion Workspace
    ┌──────────────────────┐
    │  Clients   Projects  │
    │  Invoices  Contracts │
    │  Expenses            │
    └──────────────────────┘

Show us the code

🔗 GitHub Repository: github.com/SimranShaikh20/FreelanceOS

Project Structure

freelance-os/
│
├── main.py                 ← Entry point
├── notion_helper.py        ← All Notion MCP API calls
├── ai_helper.py            ← All Gemini AI calls
├── requirements.txt
├── .env.example
│
└── features/
    ├── dashboard.py        ← AI-powered insights
    ├── clients.py          ← Client management
    ├── projects.py         ← Project tracking
    ├── contracts.py        ← AI contract generation
    ├── invoices.py         ← AI invoice generation
    └── emails.py           ← AI email generation

Key Code Snippets

AI Contract Generation:

def generate_contract(client_name, project_desc, budget, deadline):
    prompt = f"""
    Write a professional freelance contract:
    Client: {client_name}
    Project: {project_desc}
    Budget: ${budget}
    Deadline: {deadline}
    Include: scope, payment terms, revision policy,
    ownership rights, termination clause
    """
    return generate_text(prompt)

Saving to Notion MCP:

def add_contract(client_name, project_desc, budget, content):
    db_id = os.getenv("CONTRACTS_DB_ID")
    data = {
        "parent": {"database_id": db_id},
        "properties": {
            "Name": {"title": [{"text": {"content": f"Contract - {client_name}"}}]},
            "Client": {"rich_text": [{"text": {"content": client_name}}]},
            "Budget": {"number": float(budget)},
            "Content": {"rich_text": [{"text": {"content": content[:2000]}}]},
            "Status": {"multi_select": [{"name": "Draft"}]}
        }
    }
    return notion_post("pages", data)

AI Dashboard Insights:

def generate_project_summary(projects):
    project_list = "\n".join([
        f"- {p['name']}: ${p['budget']} ({p['status']})" 
        for p in projects
    ])
    prompt = f"""
    Analyze these freelance projects:
    {project_list}
    Give: revenue potential, attention needed,
    workload assessment, 3 recommendations.
    """
    return generate_text(prompt)

How I Used Notion MCP

Notion MCP is not just a storage layer in FreelanceOS — it IS the operating system.

The Integration

FreelanceOS uses Notion MCP as its single source of truth across 5 databases:

  Notion Database   What FreelanceOS Stores
  Clients           Name, email, active/inactive status
  Projects          Name, budget, deadline, progress status
  Invoices          AI-generated invoice content, amount, paid/unpaid
  Contracts         Full AI-generated contract text, draft/signed status
  Expenses          Category, amount, date for tax tracking

What Notion MCP Unlocks

1. Real-time AI + Notion sync
Every AI-generated document (contract, invoice) is immediately written to the correct Notion database via the MCP API. No copy-paste. No manual entry.

2. Live business intelligence
The Dashboard pulls live data from all 5 Notion databases simultaneously, feeds it to Gemini AI, and returns intelligent business insights about your freelance operation — all in real time.

3. Persistent workflow memory
Because everything lives in Notion, your freelance OS remembers every client, project, invoice, and contract across sessions. Notion MCP turns a Python script into a stateful business operating system.

4. Human-in-the-loop control
Every AI-generated output is reviewed by the freelancer before saving to Notion. The human stays in control — AI handles the generation, Notion handles the storage, the freelancer makes the final call.
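That review step can be reduced to gating every save behind an approval callback. A minimal sketch (the function names are illustrative, not FreelanceOS's actual API):

```python
def save_with_review(content, save_fn, approve_fn=None):
    """Show AI output to the freelancer; save only on approval.

    approve_fn defaults to an interactive yes/no prompt; passing a
    callback instead keeps the flow testable and scriptable.
    """
    if approve_fn is None:
        approve_fn = lambda text: input(f"{text}\nSave to Notion? [y/N] ").lower() == "y"
    if approve_fn(content):
        save_fn(content)
        return True
    return False
```

In this shape, save_fn would be the Notion write (e.g. the add_contract call above) and nothing reaches the workspace without an explicit yes.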

🛠️ Tech Stack

  • Notion MCP — Core workspace and data layer
  • Google Gemini 1.5 Flash — AI generation (free tier)
  • Python 3 — Application logic
  • Rich — Beautiful terminal UI
  • Requests — Notion API HTTP client

🚀 Try It Yourself

git clone https://github.com/SimranShaikh20/FreelanceOS
cd FreelanceOS
pip install -r requirements.txt
# Add your API keys to .env
python main.py

Full setup guide in the README.

JSON Formatter CLI – Format, Validate, and Analyze JSON in Seconds

You get API responses. They’re minified. Unreadable.

You have config files. Keys are unsorted. Inconsistent.

You’re debugging. You need to validate JSON structure.

Stop copying to online tools. Stop installing npm packages.

I built a zero-dependency JSON formatter that does it all in one command.

The Problem

Developers work with JSON constantly. But:

  • API responses are minified (hard to read)
  • Config files are inconsistent (multiple formats)
  • Validation errors don’t show the structure
  • Online tools are slow and privacy-invasive
  • npm packages require 100+ dependencies

You need a simple, local, fast solution.

The Solution

python json_formatter.py data.json

Pretty-printed, readable JSON. Takes 10ms.

python json_formatter.py data.json --sort

All keys alphabetically sorted. Clean.

python json_formatter.py data.json --minify

Minified for production. 40% smaller.

python json_formatter.py data.json --stats

Understand the structure instantly.

Key Features

✅ Multiple Formats

Pretty Print (readable, development)

python json_formatter.py data.json

Output:

{
  "name": "John",
  "age": 30,
  "city": "NYC"
}

Minified (compact, production)

python json_formatter.py data.json --minify

Output: {"name":"John","age":30,"city":"NYC"}

Sorted (consistent, for version control)

python json_formatter.py data.json --sort

Output:

{
  "age": 30,
  "city": "NYC",
  "name": "John"
}

Compact (balanced, for sharing)

python json_formatter.py data.json --compact

✅ Validation & Analysis

Validate

python json_formatter.py data.json --validate

Output: ✓ Valid JSON

Statistics

python json_formatter.py data.json --stats

Output:

JSON Statistics:
  Structure: Object
  Objects: 15
  Arrays: 8
  Strings: 32
  Numbers: 12
  Top keys: name, email, id, ...

✅ Batch Processing

Process multiple files:

for f in data/*.json; do
    python json_formatter.py "$f" --sort -o "$f"
done

✅ Production Ready

  • Zero dependencies
  • < 100ms for most files
  • Handles deeply nested structures
  • UTF-8 encoding support

Real-World Examples

Example 1: Debug API Response

Your API endpoint returns minified JSON. Debug it:

curl https://api.example.com/users | python json_formatter.py /dev/stdin

Before:

{"status":"success","data":{"users":[{"id":1,"name":"Alice","email":"alice@example.com"},{"id":2,"name":"Bob","email":"bob@example.com"}]},"timestamp":"2024-01-15T10:30:00Z"}

After:

{
  "status": "success",
  "data": {
    "users": [
      {
        "id": 1,
        "name": "Alice",
        "email": "alice@example.com"
      },
      {
        "id": 2,
        "name": "Bob",
        "email": "bob@example.com"
      }
    ]
  },
  "timestamp": "2024-01-15T10:30:00Z"
}

Now you can see the structure instantly.

Example 2: Standardize Config Files

Your Kubernetes config files have inconsistent formatting:

# Standardize all configs
for config in *.json; do
    python json_formatter.py "$config" --sort -o "$config"
done

git add *.json
git commit -m "Standardize JSON format"

Benefits:

  • Consistent formatting across team
  • Easier diffs in version control
  • No merge conflicts from formatting

Example 3: Data Pipeline Optimization

Processing large JSON files:

# Read pretty version (development)
python json_formatter.py raw-data.json

# Process and convert
python processor.py raw-data.json

# Output minified (production)
python json_formatter.py output.json --minify -o api-response.json

Saves 40-60% bandwidth on APIs!

Example 4: Quick Validation

Before importing to database, validate JSON:

python json_formatter.py import.json --validate

# Batch validate
for f in imports/*.json; do
    python json_formatter.py "$f" --validate || echo "Invalid: $f"
done

Performance Comparison

  Tool              Speed      Dependency               Setup
  This tool         10-50ms    None                     2 min
  jq                50-100ms   Binary                   10 min
  npm (prettier)    200ms+     100+ packages            5 min
  Online tools      1000ms+    Cloud                    1 min
  Python json lib   Varies     Requires Python script   5 min

Winner: This tool for 95% of use cases.

How It Works

Simple Python architecture:

import json

class JSONFormatter:
    @staticmethod
    def format_pretty(data, indent=2):
        return json.dumps(data, indent=indent, sort_keys=False)

    @staticmethod
    def format_minified(data):
        return json.dumps(data, separators=(',', ':'))

    @staticmethod
    def get_stats(data):
        # Walk the structure and count objects, arrays, strings, and numbers
        stats = {"objects": 0, "arrays": 0, "strings": 0, "numbers": 0}
        stack = [data]
        while stack:
            node = stack.pop()
            if isinstance(node, dict):
                stats["objects"] += 1; stack.extend(node.values())
            elif isinstance(node, list):
                stats["arrays"] += 1; stack.extend(node)
            elif isinstance(node, str):
                stats["strings"] += 1
            elif isinstance(node, (int, float)):
                stats["numbers"] += 1
        return stats

No complex logic. Just Python’s built-in json module with smart wrapping.
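The wrapping amounts to a thin argparse layer over those json.dumps calls. A minimal sketch of the idea, with flag names mirroring the ones used above (this is not the repository's exact code):

```python
import argparse
import json

def format_json(text, minify=False, sort=False):
    """Parse JSON text and re-serialize it; raises ValueError on invalid input."""
    data = json.loads(text)
    if minify:
        return json.dumps(data, sort_keys=sort, separators=(",", ":"))
    return json.dumps(data, sort_keys=sort, indent=2)

def main(argv=None):
    # Call as main(["data.json", "--sort"]) or wire to sys.argv in a script
    parser = argparse.ArgumentParser(description="Format a JSON file")
    parser.add_argument("file")
    parser.add_argument("--minify", action="store_true")
    parser.add_argument("--sort", action="store_true")
    args = parser.parse_args(argv)
    with open(args.file, encoding="utf-8") as fh:
        print(format_json(fh.read(), minify=args.minify, sort=args.sort))
```

All the heavy lifting is json.loads and json.dumps; the tool's job is just choosing the serialization options.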

Installation

Get it free on GitHub:
👉 github.com/devdattareddy/json-formatter-cli

git clone https://github.com/devdattareddy/json-formatter-cli
cd json-formatter-cli

# Run
python json_formatter.py data.json

No installation. No pip. Just run it.

Use Cases

🔧 API Development – Debug responses

🗄️ DevOps – Validate configs

📊 Data Engineering – Process JSON pipelines

🌐 Web Development – Format data files

🔍 Debugging – Understand structure

Common Workflows

API Debugging Workflow

# Get API response
curl https://api.example.com/data > response.json

# Format it
python json_formatter.py response.json

# Validate structure
python json_formatter.py response.json --stats

# Save pretty version
python json_formatter.py response.json -o pretty.json

Config File Workflow

# Check config
python json_formatter.py app-config.json --validate

# Standardize
python json_formatter.py app-config.json --sort -o app-config.json

# Deploy minified
python json_formatter.py app-config.json --minify -o app-config-prod.json

Data Analysis Workflow

# Analyze raw data
python json_formatter.py raw-data.json --stats

# Format for review
python json_formatter.py raw-data.json > formatted.json

# Process
python processor.py formatted.json

# Minify for export
python json_formatter.py output.json --minify -o output-min.json

Why I Built This

I was debugging API responses by copying to online formatters. Every response. Slow. Insecure.

Then I was standardizing JSON config files manually. Tedious.

Finally I built this: 200-line Python script that does all three.

Saves 2-3 hours per week.

Get Started

# Clone
git clone https://github.com/devdattareddy/json-formatter-cli

# Try it
echo '{"name":"John","age":30}' > test.json
python json_formatter.py test.json

Support This Project

If this tool saves you time:

🎉 Buy Me a Coffee – Help me build more tools

Star on GitHub – Help others find it

What’s your go-to JSON formatting tool? Let me know — I might add features you need!