Solved: What in the world would you call this…?

🚀 Executive Summary

TL;DR: A nested .git folder within a Git subdirectory creates ‘phantom submodule’ behavior, preventing the parent repository from tracking individual files and leading to deployment issues. This problem can be resolved by removing the nested .git folder, formalizing it as a proper Git submodule, or performing a ‘scorched earth’ reset for a guaranteed clean state.

🎯 Key Takeaways

  • A ‘phantom submodule’ or ‘Git Nesting Doll’ occurs when a subdirectory contains its own .git folder, causing the parent repository to track it as an empty pointer instead of its actual files.
  • Git status will show ‘modified: (new commits)’ for the problematic directory, but files within it cannot be added or committed directly.
  • Solutions range from the quick fix of removing the nested .git folder (destructive to inner history) to formalizing it as a proper submodule (preserving history) or a ‘scorched earth’ reset for stubborn cases.

Struggling with a Git subdirectory that won’t track files? Learn why a nested .git folder creates ‘phantom submodule’ behavior and discover three battle-tested methods to fix it, from the quick-and-dirty to the permanent solution.

What in the World Would You Call This? Taming Git’s Phantom Submodules

I’ll never forget it. 3 AM, a Thursday morning, and a ‘critical’ hotfix deployment to production. All the CI checks were green, tests passed, the pipeline glowed with success. We hit the big red button. Ten seconds later, alarms blare. The application on prod-app-01 is crash-looping. The logs scream FileNotFoundException: /etc/app/config/prod-secrets.json. I SSH in, heart pounding, and navigate to the directory. It’s empty. The entire prod-secrets/ directory, which should have been full of config files, was just… gone. After a frantic half-hour, we found the culprit. A junior dev, trying to be helpful, had run git init inside that directory by mistake. Our parent repo saw it, shrugged, and just committed an empty pointer to it instead of the actual files. We’ve all been there, and that phantom commit cost us an hour of downtime and a lot of sweat.

So, What’s Actually Happening Here?

When you see this in your terminal, it’s Git trying to be smart, but in a way that’s incredibly confusing at first glance:

$ git status
On branch main
Your branch is up to date with 'origin/main'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   src/vendor/some-library (new commits)

no changes added to commit (use "git add" and/or "git commit -a")

You see modified: src/vendor/some-library, but you can’t add it, you can’t commit it, and Git won’t show you the files inside. This happens because the some-library directory contains its own .git folder. The parent repository sees that .git folder and says, “Whoa, that’s another repository’s territory. I’m not going to track its individual files. I’ll just track which commit that repository is on.”

It’s treating it like a submodule, but without the proper setup in your .gitmodules file. I call it a “Phantom Submodule” or a “Git Nesting Doll”. It’s a repository within a repository, and it’s a common headache.
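If you want to see the phantom with your own eyes, here’s a throwaway reproduction (scratch paths only, safe to run anywhere) showing Git record mode 160000, a commit pointer, instead of the files:

```shell
# Reproduce the "Git Nesting Doll" in a scratch directory
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q parent
cd parent
mkdir -p src/vendor
git init -q src/vendor/some-library                 # the accidental inner repo
echo "hello" > src/vendor/some-library/lib.txt
git -C src/vendor/some-library add lib.txt
git -C src/vendor/some-library -c user.email=dev@example.com -c user.name=dev \
    commit -qm "inner commit"
git add src/vendor/some-library                     # parent warns about an embedded repo
# Mode 160000 marks a "gitlink": the parent tracks a commit ID, not lib.txt
git ls-files --stage src/vendor/some-library
```

The last command shows an entry with mode 160000, and `git ls-files` will never list lib.txt itself; the parent repo genuinely has no idea the file exists.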

Three Ways to Fix This Mess

Depending on your goal and how much you value the history within that nested repo, here are the three paths I usually take, from the quick-and-dirty to the architecturally sound.

Solution 1: The Quick Fix (Just Nuke the .git Folder)

This is the most common solution and, honestly, the one you’ll use 90% of the time. The problem is the nested .git directory. The solution? Get rid of it.

When to use this: You downloaded a library, cloned a project into another, or accidentally ran git init, and you do not care about the git history of the inner folder. You just want its files to be part of your main project.

  1. Navigate to your project’s root directory.
  2. Simply remove the .git directory from the subdirectory. Be careful with rm -rf!
# The path here is the subdirectory that Git is ignoring
rm -rf ./src/vendor/some-library/.git
  3. Now, run git status again. The “submodule” entry will be gone, and Git will suddenly see all the files in that directory as new, untracked files.
  4. Add them like you normally would.
git add src/vendor/some-library/
git commit -m "feat: Absorb some-library files into the main repo"

Warning: This is a destructive action for the nested repository. You are permanently deleting its commit history. If you might need that history, do not use this method. Proceed to Solution 2.

Solution 2: The ‘Right’ Way (Embrace the Submodule)

Sometimes, you want to keep the two projects separate. Maybe some-library is an open-source tool you use, and you want to be able to pull updates from its own remote. In this case, you should formalize the relationship by properly adding it as a submodule.

When to use this: The subdirectory is a legitimate, separate project that you want to link to your main project while keeping its history and identity intact.

  1. First, remove the “phantom” entry from Git’s index. We need it to stop tracking that path before we can re-add it properly.
# Note: no trailing slash here; git rm can fail on the gitlink entry if you add one
git rm --cached src/vendor/some-library
  2. Commit this removal to clean up the state.
git commit -m "chore: Remove incorrect submodule reference"
  3. Now, properly add the directory as a submodule. You’ll need the URL of its remote repository.
# git submodule add [repository_url] [path]
git submodule add https://github.com/some-user/some-library.git src/vendor/some-library

This creates a .gitmodules file and correctly registers the submodule. Now you can manage it properly, pulling updates and committing specific versions.
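For reference, the `git submodule add` command above writes an entry like this to the new .gitmodules file (using the example path and URL):

```ini
[submodule "src/vendor/some-library"]
	path = src/vendor/some-library
	url = https://github.com/some-user/some-library.git
```

Commit both .gitmodules and the submodule path together so teammates get a consistent state on their next `git pull`.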

Solution 3: The ‘Scorched Earth’ Reset

I’ve seen situations where the Git index gets so confused that the above methods don’t work cleanly. This is my “when all else fails” approach. It’s brute force, but it’s clean and guaranteed to work.

When to use this: The other methods aren’t working, or you just want to be 100% certain you have a clean slate without any lingering Git weirdness.

  1. Move the problematic subdirectory completely out of your project.
mv src/vendor/some-library /tmp/some-library-backup
  2. Commit the deletion. Your repository now officially has no knowledge of this folder.
git add src/vendor/
git commit -m "chore: Forcibly remove some-library to fix tracking"
  3. Delete the .git folder from your backup copy.
rm -rf /tmp/some-library-backup/.git
  4. Move the folder (now clean of its own Git history) back into your project.
mv /tmp/some-library-backup src/vendor/some-library
  5. Add and commit the files. They will now be seen as brand new additions.
git add src/vendor/some-library/
git commit -m "feat: Re-add some-library files with correct tracking"

Which One Should You Choose?

Here’s a quick breakdown to help you decide.

| Method | Speed | Preserves History | Best For… |
| --- | --- | --- | --- |
| 1. Quick Fix | Fastest | No (destroys inner repo history) | Accidental git init, or when you just want the code, not the history. |
| 2. The ‘Right’ Way | Medium | Yes (for both repos) | Managing dependencies and linking separate but related projects correctly. |
| 3. Scorched Earth | Slowest | No (destroys inner repo history) | When things are truly broken and you need a guaranteed clean state. |

At the end of the day, don’t feel bad when you run into this. It’s a rite of passage. It’s a quirk of how a powerful tool like Git works, and understanding why it happens is the key to not letting it derail your 3 AM deployment. Hopefully, this gives you a clear path out of the woods next time you find a phantom submodule lurking in your repository.

Darian Vance

👉 Read the original article on TechResolve.blog

Support my work

If this article helped you, you can buy me a coffee:

👉 https://buymeacoffee.com/darianvance

From Side Project to Product: How ClawHosters Became a Real SaaS

I’ve done this 4 times now. Different years, different technologies, completely different markets. And yet, it plays out the same way every single time.

I solve a problem for myself. Then for a few friends. Then strangers ask if they can have it too. And suddenly there’s a product.

RLTracker in 2017 (2.4M trades). Splex.gg in 2021 (60% market share). Golem Overlord in 2022 (20K daily players). ClawHosters in 2026.

The moment of realization? When you’re doing the same setup for the 8th time in two weeks, repeating the exact same steps. That’s not helping friends anymore. That’s a recurring problem screaming for a product.

https://yixn.io/en/blog/posts/side-project-to-product-clawhosters

Mastering Reactive Programming in Modern Mobile Development

Introduction: The Rise of Reactive Programming

In the ever-evolving world of mobile development, one paradigm has gained significant traction in recent years – reactive programming. As mobile apps become more complex, with real-time data streams and event-driven interactions, traditional imperative programming approaches have struggled to keep up. Reactive programming, with its focus on asynchronous data flows and declarative composition, has emerged as a powerful solution to these challenges.

In this article, we’ll dive deep into the world of reactive programming and explore how it can revolutionize your mobile development practices. We’ll cover the core concepts, common use cases, and practical implementation strategies to help you harness the full potential of this paradigm. By the end, you’ll be equipped with the knowledge and tools to master reactive programming and deliver exceptional mobile experiences.

Understanding Reactive Programming

At its core, reactive programming is a programming paradigm that focuses on the propagation of change. Instead of the traditional approach of explicitly managing state and control flow, reactive programming emphasizes the creation of data streams and the transformation of those streams through a series of operators.

The key principles of reactive programming are:

  1. Asynchronous Data Flows: Reactive programming deals with asynchronous data streams, allowing your application to respond to events and updates in real-time, without blocking the main thread.
  2. Declarative Composition: Rather than imperatively defining how your application should behave, you declaratively describe what your application should do, and the reactive framework handles the underlying implementation.
  3. Backpressure: Reactive programming incorporates a mechanism called backpressure, which allows consumers of data streams to control the rate at which they receive data, preventing overload and improving overall system resilience.

These principles, combined with a rich set of operators and tools, make reactive programming a powerful approach for building modern, scalable, and responsive mobile applications.

Reactive Programming in Mobile Development

In the context of mobile development, reactive programming shines when dealing with a variety of common use cases, such as:

1. User Interface Interactions

Reactive programming excels at handling user interface interactions, such as button clicks, gestures, and form inputs. By representing these events as observables, you can easily compose and transform them, creating a more responsive and intuitive user experience.

// Example: Handling button clicks in a Kotlin/Android app
val button: Button = findViewById(R.id.myButton)
button.clicks()
    .debounce(500, TimeUnit.MILLISECONDS)
    .subscribe { 
        // Handle button click event
    }

2. Asynchronous Data Fetching

Reactive programming is particularly well-suited for handling asynchronous data fetching, such as API calls and database queries. By representing these operations as observables, you can easily manage error handling, loading states, and compose multiple data sources.

// Example: Fetching data from an API in a Kotlin/Android app
val apiService: ApiService = retrofit.create(ApiService::class.java)
apiService.fetchData()
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(
        { data ->
            // Handle successful response
        },
        { error ->
            // Handle error
        }
    )

3. Real-time Updates and Notifications

Reactive programming shines when dealing with real-time updates and notifications, such as chat messages, stock prices, or location updates. By representing these events as observables, you can easily manage the flow of data and ensure your application remains responsive and up-to-date.

// Example: Handling real-time location updates in a Kotlin/Android app
val locationProvider: LocationProvider = /* ... */
locationProvider.locationUpdates()
    .subscribe { location ->
        // Handle location update
    }

4. Complex State Management

Reactive programming can greatly simplify the management of complex application state, especially in large-scale mobile apps. By using observables to represent the state of your application, you can easily compose and transform this state, ensuring a consistent and predictable user experience.

// Example: Managing application state in a Kotlin/Android app
val appState: Observable<AppState> = /* ... */
appState
    .map { state -> state.toViewState() }
    .subscribe { viewState ->
        // Update the UI based on the view state
    }

Implementing Reactive Programming in Mobile Apps

To get started with reactive programming in mobile development, you’ll need to choose a suitable reactive programming library or framework. Some popular options include:

  • RxJava/RxAndroid: A Java VM implementation of Reactive Extensions, widely used in the Android ecosystem.
  • RxSwift: The reactive programming library for Apple’s Swift language, used in iOS development.
  • Kotlin Coroutines: Kotlin’s built-in support for asynchronous programming, which can be used in conjunction with reactive programming libraries.

Regardless of the specific library you choose, the general implementation process involves the following steps:

  1. Identify Reactive Use Cases: Analyze your mobile application and identify the areas where reactive programming can provide the most value, such as user interface interactions, data fetching, real-time updates, and state management.

  2. Create Observables: Represent your application’s data and events as observables, which can be easily composed and transformed using a rich set of operators.

  3. Manage Subscriptions: Properly manage the lifecycle of your observables and their subscriptions, ensuring that your application remains responsive and efficient, especially when dealing with resources like network requests or location updates.

  4. Leverage Reactive Patterns: Adopt common reactive programming patterns, such as the Repository pattern, the Reactive ViewModel, and the Reactive Coordinator, to organize your code and promote reusability and testability.

  5. Handle Error and Loading States: Implement robust error handling and loading state management strategies to provide a seamless user experience, even in the face of network failures or data processing delays.

  6. Optimize Performance: Utilize reactive programming’s backpressure mechanism and other performance-enhancing techniques to ensure your application remains responsive and efficient, even under high load or limited device resources.

By following these steps and leveraging the power of reactive programming, you can create mobile applications that are more scalable, maintainable, and responsive, ultimately delivering an exceptional user experience.

Conclusion: The Future of Reactive Mobile Development

As mobile development continues to evolve, the adoption of reactive programming will only grow stronger. By embracing this paradigm, you can future-proof your mobile applications, making them more resilient, scalable, and adaptable to the ever-changing demands of the mobile landscape.

Remember, mastering reactive programming is a journey, not a destination. Keep exploring, experimenting, and learning, as new tools, libraries, and best practices emerge. Stay connected with the vibrant reactive programming community, and don’t hesitate to contribute your own insights and experiences.

With the principles and techniques you’ve learned in this article, you’re well on your way to becoming a reactive programming expert, ready to create the next generation of innovative and responsive mobile applications.

References and Further Reading

  • The Reactive Manifesto
  • RxJava Documentation
  • RxSwift Documentation
  • Kotlin Coroutines Documentation
  • Reactive Programming in Android
  • Reactive Programming Patterns for iOS

Reactive Programming Flow Chart

Solved: Anyone submitted their store to ChatGPT?

🚀 Executive Summary

TL;DR: AI crawlers like ChatGPT-User can overwhelm server resources by aggressively crawling, leading to performance issues. This guide provides three strategies, from simple robots.txt directives to robust web server blocks and automated IP blocking, to protect infrastructure and maintain performance.

🎯 Key Takeaways

  • AI training bots (e.g., ChatGPT-User, Google-Extended) are voracious resource consumers, unlike standard search engine bots, and can cause significant server load.
  • The robots.txt file offers a polite, low-effort method to request bots to disallow crawling, but it relies on bot cooperation and is not a command.
  • Blocking bots at the web server level (e.g., Nginx User-Agent check returning 403) is a highly reliable and efficient method that prevents requests from reaching the application.
  • Automated IP blocking with tools like fail2ban is a high-risk, last-resort solution due to the potential for false positives and blocking legitimate users from shared cloud IP ranges.
  • For most cases, a combination of robots.txt and proactive User-Agent blocking on the web server provides an effective defense against unwanted AI crawler traffic.

Struggling with AI crawlers like ChatGPT overwhelming your servers? Learn three practical, real-world strategies—from the simple robots.txt fix to robust server-level blocks—to protect your infrastructure and performance.

My Servers vs. The AI Horde: A Practical Guide to Blocking ChatGPT and Other Bots

It was 3:17 AM. PagerDuty was screaming about CPU utilization on our primary database, prod-db-01. My first thought was a DDoS attack or maybe a botched deployment from the EU team. After 20 frantic minutes digging through logs on our web nodes, I found the culprit… not a malicious actor, but a single, absurdly aggressive User-Agent: ChatGPT-User. It was crawling every single product variant, ignoring every polite ‘slow down’ signal, and was seconds away from creating a resource contention that would have taken down our entire storefront. This wasn’t an attack; it was an overly enthusiastic, uninvited guest eating everything in the pantry.

So, Why Is This Happening?

Let’s get one thing straight: these bots aren’t typically malicious. User agents like ChatGPT-User (from OpenAI) and Google-Extended (for Bard/Gemini) are web crawlers designed to gather massive amounts of data to train Large Language Models (LLMs). The problem is, they are voracious. Unlike the standard Googlebot that indexes for search and is generally well-behaved, these AI training bots can be relentless.

They don’t buy products. They don’t sign up for newsletters. They just consume resources—CPU cycles, database connections, and bandwidth. For a small to medium-sized e-commerce site or application, a single one of these bots can feel like a denial-of-service attack, grinding your application to a halt. So, let’s put a stop to it.

The Fixes: From Polite Request to Fort Knox

I’ve handled this exact scenario more times than I can count. Here are the three levels of defense we use at TechResolve, starting with the simplest.

Solution 1: The “Please Don’t Touch” Sign (robots.txt)

This is your first, easiest, and most “polite” line of defense. The robots.txt file is a standard that asks well-behaved bots not to crawl certain parts of your site, or the whole thing. The good news is that major players like OpenAI and Google claim to respect it.

Simply add the following to the robots.txt file in your website’s root directory:

# Block OpenAI's GPT bot
User-agent: ChatGPT-User
Disallow: /

# Block Google's AI bot
User-agent: Google-Extended
Disallow: /

# You might as well block Common Crawl's bot too
User-agent: CCBot
Disallow: /

Pro Tip: This is a request, not a command. Think of it as a “No Soliciting” sign on your door. Most will respect it, but nothing technically forces them to. If your server is on fire, don’t wait for this to propagate; move on to Solution 2 immediately.

Solution 2: The Bouncer at the Door (Web Server Block)

When politeness fails, we escalate. The most reliable way to stop these bots is to block them at the edge—your web server (like Nginx or Apache) or your CDN/WAF (like Cloudflare). This rejects the request before it ever gets to your application, saving precious resources.

Here’s how you’d do it in Nginx. Add this snippet inside your main server block in your site’s configuration file:

# Block specific unwanted AI bots by their User-Agent string
if ($http_user_agent ~* (ChatGPT-User|Google-Extended|CCBot)) {
    return 403; # Forbidden
}

When the bot tries to connect, Nginx will check its User-Agent, see it on the blocklist, and immediately return a 403 Forbidden error without ever touching your application logic. This is my preferred, set-and-forget method. It’s clean, efficient, and requires no cooperation from the bot.
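If you’d rather avoid `if` blocks in Nginx (the well-known “if is evil” caveat), the same blocklist can be centralized with a `map`. This is a sketch to adapt, and the variable name `$is_ai_bot` is my own, not from any standard config:

```nginx
# In the http block: flag unwanted AI bots by User-Agent
map $http_user_agent $is_ai_bot {
    default 0;
    "~*(ChatGPT-User|Google-Extended|CCBot)" 1;
}

# In each server block you want protected:
server {
    # ... existing listen / server_name / location config ...
    if ($is_ai_bot) {
        return 403;
    }
}
```

The `map` keeps the blocklist in one place, so adding a new bot is a one-line change even if you have a dozen server blocks.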

Solution 3: The ‘Nuclear’ Option (Automated IP Blocking)

Sometimes, you’re dealing with a poorly configured or rogue bot that ignores robots.txt and might even rotate its User-Agent. In this rare case, you have to block it at the firewall level based on its IP address. Doing this manually is a painful game of whack-a-mole, so we automate it with tools like fail2ban.

The strategy is to have fail2ban monitor your web server’s access logs (e.g., /var/log/nginx/access.log). You create a filter that looks for rapid, repeated requests from the same IP address that match a specific pattern (like crawling thousands of product pages in minutes). When the filter is triggered, fail2ban automatically adds a rule to your firewall (like iptables) to drop all traffic from that IP for a set period.
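As a rough sketch of that setup (the filter name, regex, and thresholds below are illustrative assumptions to tune against your actual log format, not battle-tested config):

```ini
# /etc/fail2ban/filter.d/ai-crawlers.conf (illustrative)
[Definition]
failregex = ^<HOST> .*"[^"]*(ChatGPT-User|GPTBot|CCBot)[^"]*"$

# /etc/fail2ban/jail.local (thresholds are examples; start conservative)
[ai-crawlers]
enabled  = true
filter   = ai-crawlers
logpath  = /var/log/nginx/access.log
port     = http,https
maxretry = 120
findtime = 60
bantime  = 3600
```

Always dry-run a new filter with `fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/ai-crawlers.conf` before enabling the jail, so you can see exactly which lines (and which IPs) it would catch.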

This is powerful, but also dangerous. It’s a last resort for a reason.

Warning: This is a hacky, high-risk solution. AI crawlers often operate from massive cloud provider IP ranges (like AWS or GCP). If you’re not extremely careful with your rules, you could accidentally block a whole subnet and lock out legitimate customers who happen to be using the same cloud infrastructure. Tread very, very carefully here.

Choosing Your Weapon: A Quick Comparison

Here’s a quick breakdown to help you decide which approach is right for you.

| Method | Effort | Reliability | Risk |
| --- | --- | --- | --- |
| 1. robots.txt | Lowest | Low (Bot-dependent) | None |
| 2. Web Server Block | Low | High | Very Low |
| 3. Automated IP Block | High | Highest | High (False Positives) |

For 99% of cases, the combination of a robots.txt entry (Solution 1) and a proactive User-Agent block on your web server (Solution 2) is the perfect defense. It keeps your servers running smoothly and lets you sleep through the night without getting a PagerDuty alert from a bot that just wants to read your entire website.

Stay safe out there.

– Darian Vance, Senior DevOps Engineer, TechResolve

Darian Vance

👉 Read the original article on TechResolve.blog


Janee Setup Guide: Secure API Key Management for OpenClaw, Claude, and Other AI Agents

Introduction

AI coding agents are transforming software development. Tools like Claude Desktop, Cursor, and Cline can write code, debug issues, and even make API calls on your behalf.

But there’s a problem: how do you give these agents API access without compromising security?

The common approach — pasting API keys into config files or prompts — is risky:

  • Keys stored in plaintext on disk
  • Agents can read .env files
  • No audit trail of what was accessed
  • No way to revoke access without rotating keys
  • One prompt injection away from full API access

This guide shows you how to use Janee, a local secrets manager designed for AI agent workflows, to solve these problems.

What is Janee?

Janee is an MCP (Model Context Protocol) server that stores API credentials encrypted on your machine and acts as a secure proxy.

How it works:

  1. You store API keys in ~/.janee/config.yaml (encrypted at rest)
  2. You run janee serve to start the MCP server
  3. Your AI agents connect to Janee via MCP
  4. When an agent needs to call an API, it requests access through Janee
  5. Janee injects the real key server-side, makes the request, and logs everything
  6. The agent receives the API response but never sees your actual key

Key benefits:

  • Encrypted storage: Keys encrypted with AES-256-GCM
  • Zero-knowledge agents: Agents never see the actual credentials
  • Full audit trail: Every request logged with timestamp, service, method, path
  • Policy enforcement: Control what HTTP methods/paths agents can access
  • Configure once, use everywhere: One config, all MCP agents get access
  • Open source (MIT): Full transparency

Prerequisites

  • Node.js 18+ installed
  • An AI agent that supports MCP (Claude Desktop, Cursor, OpenClaw, Cline, etc.)
  • API keys you want to manage (Stripe, GitHub, OpenAI, etc.)

Installation

Install Janee globally via npm:

npm install -g @true-and-useful/janee

Verify installation:

janee --version

Step 1: Initialize Janee

Run the init command to set up your Janee configuration:

janee init

This creates ~/.janee/config.yaml with example services.

Step 2: Add Your API Services

You can add services interactively or via command-line arguments.

Option A: Interactive (recommended for beginners)

janee add

Janee will prompt you for:

  • Service name (e.g., stripe)
  • Base URL (e.g., https://api.stripe.com)
  • Auth type (bearer, basic, hmac-bybit, etc.)
  • API key/credentials

Option B: Command-line arguments

janee add stripe \
  -u https://api.stripe.com \
  --auth-type bearer \
  -k sk_live_xxx

Step 3: Create Capabilities

Capabilities define what agents can do with each service. They include policies like:

  • Time-to-live (TTL)
  • Auto-approval
  • Request rules (allow/deny specific HTTP methods and paths)

Example: Read-only Stripe access

capabilities:
  stripe_readonly:
    service: stripe
    ttl: 1h
    autoApprove: true
    rules:
      allow:
        - GET *
      deny:
        - POST *
        - DELETE *
        - PUT *

Example: Stripe billing (limited write access)

capabilities:
  stripe_billing:
    service: stripe
    ttl: 15m
    requiresReason: true
    rules:
      allow:
        - GET *
        - POST /v1/refunds/*
        - POST /v1/invoices/*
      deny:
        - POST /v1/charges/*  # Can't charge cards
        - DELETE *

Policies are enforced server-side. Even if an agent tries to bypass them, Janee blocks unauthorized requests.

Step 4: Start the MCP Server

janee serve

You should see:

Janee MCP server running on stdio
Config: /Users/yourname/.janee/config.yaml
Logs: /Users/yourname/.janee/logs/

Keep this running. Janee is now ready to accept requests from MCP clients.

Step 5: Configure Your AI Agent

For Claude Desktop

Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or the equivalent on your OS:

{
  "mcpServers": {
    "janee": {
      "command": "janee",
      "args": ["serve"]
    }
  }
}

Restart Claude Desktop.

For Cursor

Edit Cursor’s MCP settings (Settings → Extensions → MCP):

{
  "mcpServers": {
    "janee": {
      "command": "janee",
      "args": ["serve"]
    }
  }
}

For OpenClaw

Install the native plugin:

npm install -g @true-and-useful/janee-openclaw
openclaw plugins install @true-and-useful/janee-openclaw

Enable in your agent config:

{
  agents: {
    list: [{
      id: "main",
      tools: { allow: ["janee"] }
    }]
  }
}

Full integration guides: https://janee.io/docs

Step 6: Test It

Ask your agent to make an API call through Janee.

Example prompt for Claude Desktop:

“Can you check my Stripe account balance using Janee?”

Claude will:

  1. Discover the execute tool from Janee’s MCP server
  2. Call execute with capability stripe, method GET, path /v1/balance
  3. Janee decrypts your Stripe key, makes the request, logs it
  4. Returns the balance data to Claude

Check the audit log:

janee logs

You’ll see:

2025-02-11 14:32:15 | stripe | GET /v1/balance | 200 | User asked for account balance

Understanding Request Policies

Request rules use this format: METHOD PATH

Examples:

| Rule | Meaning |
| --- | --- |
| GET * | Allow all GET requests |
| POST /v1/charges/* | Allow POST to /v1/charges/ and subpaths |
| DELETE * | Deny all DELETE requests |
| * /v1/customers | Any method to /v1/customers |

How rules work:

  1. Deny rules checked first — explicit deny always wins
  2. Then allow rules checked — must match to proceed
  3. No rules defined → allow all (backward compatible)
  4. Rules defined but no match → denied by default
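That deny-first evaluation order can be sketched in a few lines of shell. This is a toy model only (rules hardcoded from the stripe_billing example earlier, not Janee’s actual implementation), with shell `case` globs standing in for the rule patterns:

```shell
# Toy deny-first matcher: check "METHOD PATH" against hardcoded rules
check() {
  req="$1"
  # 1. Deny rules first: an explicit deny always wins
  for rule in "POST /v1/charges/*" "DELETE *"; do
    case "$req" in $rule) echo "denied"; return ;; esac
  done
  # 2. Then allow rules: the request must match one to proceed
  for rule in "GET *" "POST /v1/refunds/*"; do
    case "$req" in $rule) echo "allowed"; return ;; esac
  done
  # 3. Rules are defined but nothing matched: denied by default
  echo "denied"
}

check "GET /v1/balance"        # allowed by "GET *"
check "POST /v1/charges/abc"   # hits the explicit deny
check "PUT /v1/customers/1"    # no match at all: denied by default
```

The key property to notice: a request that matches both a deny rule and an allow rule is still denied, because the deny list is checked first and short-circuits.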

Common Use Cases

Use Case 1: Read-only GitHub access

services:
  github:
    baseUrl: https://api.github.com
    auth:
      type: bearer
      key: ghp_xxx

capabilities:
  github_readonly:
    service: github
    ttl: 2h
    rules:
      allow: [GET *]
      deny: [POST *, DELETE *, PUT *, PATCH *]

Your agent can read repos, issues, PRs — but can’t create, update, or delete anything.

Use Case 2: OpenAI API with usage limits

services:
  openai:
    baseUrl: https://api.openai.com
    auth:
      type: bearer
      key: sk-xxx

capabilities:
  openai:
    service: openai
    ttl: 30m
    requiresReason: true

Short TTL + requires reason = you can monitor usage and revoke if needed.

Use Case 3: Internal API with strict controls

services:
  internal_api:
    baseUrl: https://api.yourcompany.com
    auth:
      type: bearer
      key: internal_xxx

capabilities:
  internal_readonly:
    service: internal_api
    ttl: 10m
    autoApprove: false  # Manual approval required
    rules:
      allow: [GET /v1/users/*, GET /v1/analytics/*]

Very short TTL, manual approval, specific endpoints only.

Managing Sessions

List active sessions:

janee sessions

Revoke a session:

janee revoke <session-id>

View audit log in real-time:

janee logs -f

Security Best Practices

  1. Use specific capabilities — Don’t give broad access. Create stripe_readonly vs stripe_billing vs stripe_admin.

  2. Set appropriate TTLs — Exploratory work: 1-2h. Sensitive operations: 5-15m.

  3. Enable requiresReason — For sensitive services, make agents provide a reason (logged for audit).

  4. Use request rules — Default deny, explicitly allow only what’s needed.

  5. Monitor audit logs — Regularly review janee logs to see what was accessed.

  6. Rotate keys periodically — Janee makes this easy (update config once, all agents use new key).

  7. Backup your config — ~/.janee/config.yaml is encrypted, but back it up securely.

Troubleshooting

Issue: Agent can’t see Janee tools

Solution: Make sure janee serve is running and your agent’s MCP config points to it. Restart the agent.

Issue: “Permission denied” or “Capability not found”

Solution: Check that the capability name in your config matches what the agent is requesting.

Issue: Requests blocked by rules

Solution: Check janee logs to see which rule blocked it. Adjust your allow/deny patterns in config.

Issue: Keys not encrypted

Solution: Keys are encrypted when Janee reads/writes the config. If you manually edit config.yaml, run janee serve to trigger encryption.

Advanced: HTTP Transport for Containers

If you’re running agents in Docker/Kubernetes, use HTTP transport:

janee serve --transport http --port 9100

Configure your containerized agent to connect via HTTP:

# docker-compose.yml
services:
  janee:
    build: .
    ports:
      - "9100:9100"
    command: janee serve --transport http --port 9100

  agent:
    depends_on:
      - janee
    environment:
      - JANEE_HTTP_URL=http://janee:9100

Full guide: https://janee.io/docs/container-openclaw

Conclusion

You’ve successfully set up Janee to manage API keys for your AI agents.

What you’ve gained:

  • Encrypted credential storage
  • Zero-knowledge agents (they never see your keys)
  • Full audit trail
  • Policy enforcement
  • One config for all your agents

Next steps:

  • Add more services (janee add)
  • Experiment with request policies
  • Set up integrations for all your agent tools
  • Monitor audit logs (janee logs -f)

Resources:

  • Docs: https://janee.io
  • GitHub: https://github.com/true-and-useful/janee
  • Issues/Support: https://github.com/true-and-useful/janee/issues

If you found this useful, give Janee a star on GitHub!

[New Livestream] Go 1.26 Release Party

Join us for the Go 1.26 Release Party, where we’ll celebrate the latest Go release together with the community. In this livestream, Go experts will walk through what’s new in Go 1.26, why the changes matter, and how they impact real-world Go development.

Date: February 19, 2026

Time: 4:00 – 5:30 pm UTC

Register Now

The celebration will feature an expert deep dive by Anton Zhiyanov, Go educator and creator of antonz.org, who will cover the most important updates in Go 1.26 through live coding and practical examples. This will be followed by a session with Alex Rios, showcasing how GoLand supports the new release from day one, helping developers adopt Go 1.26 with confidence.

Expect a mix of technical insights, hands-on demonstrations, and open discussion – plus a few surprises along the way!

Guests

Anton Zhiyanov

Anton Zhiyanov is a backend developer with 20 years of experience in building software systems. He specializes in creating maintainable and efficient software while contributing actively to open-source projects.

Anton has authored a book on Go concurrency and teaches online courses focused on the Go standard library and concurrency. He is known in the community for developing interactive tours for Go releases, a series he has authored for every version starting with Go 1.22.

Website 

GitHub 

X 

LinkedIn

Alex Rios

Alex is a Senior Staff Engineer at Stone, where he builds developer platforms and internal tools that empower engineering teams across the organization. With 17+ years of experience, he’s the author of System Programming Essentials with Go and Learning Zig, and writes about staff engineering and systems thinking on Substack and his personal blog.

Alex speaks regularly at international conferences and is passionate about data-oriented design, making complex systems understandable, and helping engineers grow into technical leadership roles.

Website 

X 

Bluesky

LinkedIn

Happy coding,

The GoLand team

Introducing Databao: The JetBrains Tool That Lets You Talk to Your Data

At JetBrains, we build tools that help teams work with complex systems in a more productive and enjoyable way. As AI becomes part of everyday workflows, a new challenge is emerging for data teams: How can you enable AI-assisted analytics without sacrificing accuracy, transparency, or control over data?

Today, we’re introducing Databao, a new data product from JetBrains designed to bring reliable semantic context to data teams, with the ability to build your own AI agents on top of it. We’re aiming to build an AI-native analytics tool that business users can rely on, alongside the dashboards that teams use every day.

As part of this work, we invite data teams to get in touch with us to launch a proof of concept for self-serve analytics for business users, discuss their needs, and share feedback throughout the journey.

TALK TO THE TEAM

Why Databao?

Databao’s mission

Modern data workflows are evolving quickly. Teams need flexibility and scalability as AI becomes a core part of how insights are generated. Sharing and reusing domain context, trusting AI-generated results, and scaling analytics without increasing complexity are some of the main challenges for companies.

Databao was built to solve practical problems that data teams face today:

  • Enabling business users to ask their own data questions in plain language.
  • Relying on consistent, governed business definitions across analyses.
  • Getting more accurate, repeatable results from AI-assisted workflows.
  • Reducing manual back-and-forth and ad-hoc requests.

In practice, this means enabling personalized, self-service analytics that are controllable, scalable, and continuously improve over time.

Providing a self-maintaining semantic layer for companies’ data

Databao’s CLI tool, the context engine, is designed to extract schema and metadata from data sources and give teams a governed layer that captures business logic and definitions from databases, BI tools, and documentation. This keeps context consistent and reusable.

As one of our Alpha users puts it:
“Before the context engine, I had to copy and paste my database schema into the LLM. Now I just point it to the data source and ask it to generate a query – and it works. No more incorrect column types, format mismatches, or hallucinations.”

Enabling agentic analytics

In addition to the context engine, the data agent, available as a local, open-source Python SDK, uses this context to enable users to query, clean, and visualize enterprise data, generating production-quality SQL and outputs that business users can trust.

Another of our early users shared: “The Databao agent joined three to four tables perfectly, which no other data agent can do. I’m literally happy with this.”

The platform that brings it all together

Databao is designed for teams and the people who implement and own data tooling: analytics leads, data engineers, and platform teams.

It starts with a simple local setup and grows naturally as usage and complexity increase. From the first open-source building blocks, we are now evolving into a team-ready SaaS layer that brings shared context, collaboration, and production-grade reliability.

By avoiding vendor lock-in, working across tools, and adapting to different organizational setups, we also aim to make our platform suitable for any production environment, not just for experimentation.


Databao’s trust milestones

Over the past year, we’ve focused on understanding how structured context and a semantic layer can improve the accuracy of agentic answers. This research has informed the foundations of Databao and how we started building our product.

As a result, we recently reached two important milestones: achieving a first-place ranking in the DBT track of the SPIDER 2.0 Text-to-SQL benchmark – one of the most widely recognized evaluations for SQL generation – and joining the Open Semantic Interchange (OSI), an open-source initiative led by Snowflake and other industry leaders to define a shared, vendor-neutral standard for semantic models.

Let’s turn AI analytics into a working POC, together

We are excited to invite teams to build a proof of concept together with the Databao team. We’ll work with you to understand your use case, define a context-building process, and give the agent access to a selected group of business users. Together, we’ll then evaluate the quality of the responses and overall satisfaction with the results.

TALK TO THE TEAM

And if you’d like to explore Databao, you can already try both our context engine and data agent through our open-source libraries.

dotInsights | February 2026

Did you know? C# allows digit separators (_) in numeric literals to make large numbers more readable, e.g., int num = 1_000_000; is valid and equals one million.


Welcome to dotInsights by JetBrains! This newsletter is the home for recent .NET and software development related information.

🔗 Links

Here’s the latest from the developer community.

  • Agent Skills: From Claude to Open Standard to Your Daily Coding Workflow | C# 14 Extension Members: Complete Guide to Properties, Operators, and Static Extensions | C# 14 More Partial Members: Partial Events and Partial Constructors – Laurent Kempé
  • Exploring Marshal Methods in .NET MAUI – Leomaris Reyes
  • MCPs for Developers Who Think They Don’t Need MCPs – Angie Jones
  • JetBrains ReSharper for Visual Studio – Karen Payne
  • Enterprise Patterns, Real Code: Implementing Fowler’s Ideas in C# – Chris Woodruff
  • How to use Agent Skills in GitHub Copilot – Daniel Ward
  • .NET Toolbox – Steven Giesel
  • Making foreach on an IEnumerable allocation-free using reflection and dynamic methods – Andrew Lock
  • What Burnout Taught Me About Sustainable Coding Practices – Aicha Laafia
  • AI Makes Code Cheap. That’s Why Design Matters More 🎥 – CodeOpinion
  • The Boolean Trick No C# Developer Knows About 🎥 – Nick Chapsas
  • 2code ^ !2code [S2026E01] Inspector Roslyn says “Hello, World!” – Eva Ditzelmüller & Stefan Pölz
  • ASP.NET Core Pitfalls – Content Type Mismatch – Ricardo Peres
  • A Complete Guide to Converting Markdown to PDF in .NET C# – Bjoern Meyer
  • Deterministic Voice Forms with Blazor and Local LLMs – Scott Galloway
  • Type-Safe Collections in C#: How NonEmptyList Eliminates Runtime Exceptions – Ahmad Al-Freihat
  • Code is a liability (not an asset) – Cory Doctorow
  • Migrating NoSQL Databases: Real-World Lessons Learned – Felipe Cardeneti Mendes
  • Blazor Basics: Should You Migrate to .NET 10? – Claudio Bernasconi
  • Why I Use JetBrains Rider for .NET Development – Emanuele Bartolesi
  • New in .NET 10 and C# 14: EF Core 10’s Faster Production Queries – Ali Hamza Ansari
  • C# – F# Interop (2026 edition) – Urs Enzler
  • Avoiding common pitfalls with async/await at NDC Copenhagen 🎥 – Stephen Cleary 
  • Future Proof with ASP.NET Core API Versioning at NDC Copenhagen 🎥 – Jay Harris
  • Duende IdentityServer 7: A Complete Setup Guide for ASP.NET Core – Tore Nestenius

☕ Coffee Break

Take a break to catch some fun social posts.

🗞️ JetBrains News

What’s going on at JetBrains? Check it out here:

  • How to Write Better AI Prompts as a Software Developer in 2026 🎥
  • Rider 2025.3: Day-One Support for .NET 10 and C# 14, a New Default UI, and Faster Startup
  • ReSharper and Rider 2025.3.2 Updates Out Now!
  • Game Dev in 2025: Excerpts From the State of Game Development Report
  • How We Made Variable Inspections 87 Times Faster for Unreal Engine in Rider
  • ReSharper 2026.1 Early Access Program Has Begun
  • Rider 2026.1 Early Access Program Is Now Open!

✉️ Comments? Questions? Send us an email.

Subscribe to dotInsights

Open Source in Focus: .NET Projects and the Tools Behind Them

At JetBrains, we love seeing the developer community grow and thrive. That’s why we support open-source projects that make a real difference – the ones that help developers learn, build, and create better software together. We’re proud to back open-source maintainers with free licenses and contribute to initiatives that strengthen the ecosystem and the people behind it.

In this edition of the Open Source in Focus blog posts series, we spotlight four projects across the .NET ecosystem – each is a good reminder that developer experience is what makes ambitious projects sustainable over time.

Avalonia UI: Cross-platform .NET UI toolkit

Avalonia is an open-source, cross-platform UI framework for building .NET applications. It was launched in 2013 as an attempt to reimplement WPF as an open-source project and has grown steadily over time. The team notes it started gaining mindshare in 2021 and has continued to see significant adoption growth since then.

Our contributors work on all supported desktop platforms: we have some working on macOS full time, some on Linux, and some on Windows. The only IDE that works on all of these platforms is JetBrains Rider. It’s an additional bonus that Rider also has the best Avalonia XAML support out there.

Steven Kirk, Avalonia creator

What’s next: The team’s goal is to keep pushing Avalonia to be “the leading .NET UI toolkit.” They’ve recently released Avalonia Accelerate with phased rollout plans – the first phase includes new Developer Tools, Media Player, and native WebView controls. Later phases will include a packaging tool, a GUI designer, and more. The team is also working on v12, with more news expected in the coming months.

MudBlazor: .NET-first Blazor component library

MudBlazor started when its founders were contributing to other Blazor component libraries and ran into architectural limitations and instability – coding felt like fighting against hidden JavaScript logic, and there was no extensive unit test coverage. 

The creators set out to build a developer-friendly .NET component library, with most functionality written in C#, using JavaScript only when absolutely necessary. The result is a library with 90% test coverage, designed to be stable, well-tested, and easy to debug.

ReSharper has been incredibly helpful in spotting issues in MudBlazor: NullReferenceExceptions, unused values, expressions that are always null, etc. I rely heavily on the Unit Test Explorer and Localization Manager.

Additionally, dotMemory and dotPeek have been invaluable tools for us, especially in tricky cases when users report performance issues or large memory usage. These tools were particularly helpful in improving the performance of our popover system, which had previously experienced problems.

Artyom Melnikov, MudBlazor maintainer

For me personally, ReSharper with its included test runner is the most important tool when working on MudBlazor. It’s a huge productivity booster, as it automatically adds usages, suggests simplifications, highlights unused code, and underlines potential errors. I use it extensively for refactoring whole files or projects. My favorite key combination is Ctrl+T, which lets you jump to a certain type. In a big source base like MudBlazor, this saves a huge amount of time.

Also, dotCover played a huge role for us in our efforts to increase test coverage. I used it to discover untested code regions and to measure the coverage of methods, classes, or entire modules quickly and effectively.

Meinrad Recheis, MudBlazor co-creator

What’s next: The team describes MudBlazor as “very mature”, with an emphasis on keeping complexity limited and the library maintainable by a small team. They expect substantial refactoring ahead to address internal design issues – and say they can do it safely thanks to ReSharper and high test coverage.

LINQ to DB: LINQ-based data access library

The first code that eventually became LINQ to DB dates back to 2002. It started as a simple object mapper and later evolved into a library called BLToolkit. After LINQ support arrived in .NET, the team built a custom LINQ provider and, in 2012, redesigned the approach by extracting the LINQ-related parts into a standalone library: LINQ to DB, which is now a mature, high-performance data access library.

JetBrains IDEs play a critical role in our daily workflow. Rider and ReSharper help us keep our large and complex codebase clean and consistent. Their static analysis, code inspections, and navigation features make it easy to spot issues early and refactor safely. They’re especially helpful when dealing with complex expression tree transformations and query generation logic. 

DataGrip is our go-to tool for interacting with databases during development and debugging. Its support for multiple RDBMSs and rich SQL capabilities align perfectly with LINQ to DB’s multi-database nature, making testing and validation much smoother. 

For testing and performance, we rely on dotCover to ensure our unit tests provide thorough coverage of edge cases and expression scenarios. dotMemory helps us detect and fix memory leaks and inefficiencies, which is especially important for long-running data operations. dotTrace has been instrumental in turning LINQ to DB into the high-performance library it is today. Without it, we simply couldn’t have optimized the expression translation pipeline and query execution paths to the level they are at now.

Igor Tkachev, LINQ to DB creator

What’s next: The team is working on improving the expression tree translation engine to support more advanced LINQ constructs and custom expressions. Better diagnostics, deeper Roslyn-based source generation, and more consistent cross-database behavior are also on the roadmap. Long-term, the team aims for tighter integration with modern .NET features and better usability in async and high-throughput scenarios – without compromising performance.

PeachPie: PHP compiler for .NET

PeachPie (originally called Phalanger) began nearly 20 years ago as an experimental effort to translate PHP into Common Intermediate Language and run it on the .NET runtime, with the hypothesis that this could improve performance and security. Today, the team notes that people use PeachPie for hybrid apps in PHP and C#, including scenarios like WordPress on the frontend with a C# backend in a single project.

We’re experimenting with Rider, trying to support PeachPie PHP/.NET applications in the IDE, providing IntelliSense, design-time analyses, debugging via CLR Debugger, etc.

Jakub Míšek, PeachPie creator

What’s next: Future development is focused on big-picture milestones, such as getting Laravel or Symfony to run on .NET, supported by the smaller functionality, library work, and bug fixes needed along the way.


From our perspective, the most encouraging pattern across these stories is how much maintainability depends on everyday developer ergonomics: safer refactors, strong diagnostics, fast navigation, and tooling that helps teams validate changes with confidence. 

If you’re using any of these projects, consider sharing feedback, filing an issue, or contributing a small improvement – Rider is free for open-source development and ready to help you code, collaborate, and contribute. 

More from this series

Full Scholarships from the JetBrains Foundation – Applications Are Now Open!

The JetBrains Foundation is offering up to 40 full scholarships for the Computer Science and Artificial Intelligence BSc program at Neapolis University Pafos starting this year.

Find the complete program details here.


What’s New for 2026

  • More scholarships available: The JetBrains Foundation is offering up to 40 scholarships this year – double the previous amount.
  • Earlier start: Admission begins on February 2, 2026, one month earlier than in previous years.

These scholarships are comprehensive, covering tuition, accommodation, medical insurance, visa fees, and a monthly stipend (€300).

Graduates of programs developed in partnership with the JetBrains Foundation are highly sought after, working at leading IT companies such as Meta, Google, and JetBrains. Don’t miss this opportunity to launch your career!

Find the complete program details here.

Key Admission Dates

  • First Round: application deadline April 28, 2026; entrance test May 3, 2026
  • Second Round: application deadline June 9, 2026; entrance test June 14, 2026

Upcoming Livestream

Join the program team on February 25, 2026, for a livestream where our experts will explain the curriculum and admission process. Subscribe to our Telegram chat to get the link!

Contact Us

Have questions? Reach out via our Telegram chat or email us at nup@jetbrains.com.

Apply now to take the first step toward a future in one of today’s most dynamic and in-demand fields!