Six Enterprise AI Adoption Challenges and How Docker’s Latest Tools Address Them

AI isn’t coming to your software teams. It’s already there. Developers are running local models, pulling AI-optimized images, connecting autonomous agents to codebases and cloud APIs, and integrating AI tools into every stage of the development lifecycle. The question for security, platform, and executive leadership isn’t whether to allow it. It’s whether you govern it or pretend it isn’t happening.

The risks are well-documented: unpredictable inference costs, unvetted images and tools entering the supply chain, autonomous agents with write access to production systems, and no audit trail across any of it. Without a deliberate architecture, this becomes Shadow AI.

Docker’s recent AI-focused releases address these challenges directly. Here’s how they map to the concerns platform and security teams are navigating right now.

The Challenges (and What Addresses Them)

1. “AI inference costs are unpredictable and growing fast.”

Docker Model Runner + Remocal/MVM + Docker Offload

Docker’s “Remocal” approach pairs local-first development with Minimum Viable Models (MVMs), the smallest models that get the job done. (Docker, “Remocal + Minimum Viable Models”) Docker Model Runner executes these locally through standard APIs (OpenAI-compatible and Ollama-compatible) with three inference engines. (Docker Docs, “Model Runner”) Developers iterate locally at zero marginal token cost and only hit cloud APIs when they need to.

When local hardware isn’t enough, Docker Offload extends the same workflow to cloud infrastructure (L4 GPU currently in beta) without changing a single command. (Docker, “Docker Offload”) The cost lever is clear: local by default, cloud when justified.
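To make the "standard APIs" point concrete, here is a minimal sketch of what local-first inference looks like from application code: an ordinary OpenAI-style chat completion request sent to a locally served model. The base URL, port, and model name are placeholders rather than guaranteed defaults; substitute whatever Model Runner reports in your environment.

// Minimal sketch: call a locally served model through its OpenAI-compatible API.
// BASE_URL and the model name are placeholders; check your Docker Model Runner setup.
const BASE_URL = "http://localhost:12434/engines/v1";

const response = await fetch(`${BASE_URL}/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "ai/smollm2", // placeholder: whichever local model you pulled
    messages: [{ role: "user", content: "Summarize this changelog." }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);

Because the endpoint is OpenAI-compatible, the same client code can later point at a hosted API when the workload justifies it.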

2. “Autonomous agents with write access terrify our security team.”

Docker Sandboxes

This is the answer to the “but what if the agent goes rogue” conversation. Each sandbox runs in a dedicated microVM with its own kernel, filesystem, and private Docker daemon. The agent can build, install, test, and run containers, all without any access to the host environment. Only the project workspace is mounted. When you tear down the sandbox, everything inside it is deleted. (Docker Docs, “Sandboxes Architecture”)

This is hypervisor-level isolation, not container-level. Sandboxes already support Claude Code, Codex, Copilot, Gemini, cagent, Kiro, OpenCode, and custom shell. (Docker Docs, “Sandbox Agents”) For standard (non-agent) containers, Enhanced Container Isolation (ECI) provides complementary protection using Linux user namespaces. (Docker Docs, “Enhanced Container Isolation”)

3. “Developers are connecting agents to GitHub, Jira, and databases with no oversight.”

MCP Gateway + MCP Catalog

The open-source MCP Gateway runs every tool server in an isolated container with restricted privileges, network controls, and resource limits. It manages credential injection (so API keys don’t live in developer configs), and it includes built-in logging and call tracing. Every tool invocation is recorded. (Docker Docs, “MCP Gateway”; Docker, “MCP Gateway: Secure Infrastructure for Agentic AI”)

The MCP Catalog provides 300+ curated, verified tool servers packaged as Docker images. Organizations can create custom catalogs scoped to their approved servers, turning “find a random MCP server on the internet” into “pick from the approved list.” Docker is also applying automated trust measures including structured review of incoming changes. (Docker Docs, “MCP Catalog”)

4. “We can’t control what our developers are pulling and running.”

Docker Hardened Images + Registry Access Management + Image Access Management

Docker Hardened Images (DHI) are distroless, minimal base images stripped of shells, package managers, and unnecessary components. Every image ships with an SBOM, SLSA Build Level 3 provenance, and transparent CVE data. (Docker, “Introducing Docker Hardened Images”) DHI is now free and open source (Apache 2.0) with over 1,000 images available, which removes the “it’s too expensive to do the right thing” objection. (Docker Press Release, December 17, 2025)

Registry Access Management (RAM) provides DNS-level filtering to control which registries developers can access through Docker Desktop. (Docker Docs, “Registry Access Management”) Image Access Management adds controls over which types of Docker Hub images are permitted. (Docker Docs, “Image Access Management”) Together, they let your platform team enforce approved sources without slowing anyone down.

This isn’t just for application images. Docker is actively extending hardening to MCP server images, the tools AI agents use to interact with external systems. (Docker, “Hardened Images for Everyone”)

5. “We need an audit trail and we need it yesterday.”

Docker Scout + MCP Gateway logging

Docker Scout provides continuous SBOM and vulnerability analysis across container images in the stack: DHI base images, application images, and MCP server images. (Docker Docs, “Docker Scout”) MCP Gateway logging captures tool-call details with support for signature verification (checking image provenance before use) and secret blocking (scanning payloads for exposed credentials). (Docker, “MCP Gateway: Secure Infrastructure for Agentic AI”; GitHub, docker/mcp-gateway)

Together, these answer the three questions auditors will ask: What’s running? Is it safe? What did the agent do?

6. “We can’t enforce any of this without knowing who’s who.”

SSO + SCIM

Identity is the layer that makes all the others enforceable. RAM policies only activate when developers sign in with organization credentials. Image Access Management is scoped to authenticated users. Audit trails are meaningless without verified identities attached.

SSO authenticates via your existing identity provider. SCIM automates provisioning and deprovisioning. When someone joins or leaves, their Docker access updates automatically. (Docker Docs, “Single Sign-On”)

What This Looks Like Composed

Outcome, tools, and why it matters:

  • Lower AI spend + faster iteration: Docker Model Runner + Remocal/MVM + Docker Offload. Run more of the dev loop locally to reduce paid API calls and latency during iteration.
  • Safe autonomy for agents: Docker Sandboxes. MicroVM isolation plus fast reset reduces host risk and cleanup time when agents misbehave.
  • Governed tool access: Docker’s MCP Catalog + Toolkit (including MCP Gateway). Centralize tool servers, apply restrictions, and capture logs/traces for visibility.
  • Stronger supply-chain posture: Docker Hardened Images + RAM + Image Access Management. Standardize hardened bases and prevent pulling from unapproved sources.
  • Fewer vuln/audit fire drills: Docker Scout + MCP Gateway logging. Continuous SBOM and CVE visibility plus tool-call logs improve triage and audit readiness.
  • Identity-based policy enforcement: SSO + SCIM. Tie governance controls and audit trails to verified, managed identities across every layer.
  • Faster CI + hardened non-agent containers: Docker Build Cloud + Enhanced Container Isolation (ECI). Reduce build bottlenecks and strengthen isolation for everyday containers.

The Seven-Layer Architecture

For teams ready to go deeper, here is a reference architecture that weaves these capabilities into seven concurrent layers to solve the problems mentioned above.

Each layer, the Docker tool(s) behind it, and what it does:

  • Foundation: Docker Hardened Images + RAM + Image Access Management. Hardened, minimal base images; registry allowlisting and image-type controls.
  • Definition: cagent. Declarative YAML agent configs with root/sub-agent orchestration.
  • Inference: Docker Model Runner + Remocal/MVM. Local-first model execution with Minimum Viable Models; Docker Offload for cloud burst.
  • Execution: Docker Sandboxes. MicroVM isolation with a private Docker daemon per agent.
  • External Access: MCP Gateway + MCP Catalog. Governed, containerized tool servers with credential injection and call tracing.
  • Observability: Docker Scout + MCP Gateway logging. Continuous SBOM/CVE analysis; tool-call audit trails.
  • Identity: SSO + SCIM. Authentication, user provisioning, and identity-based policy enforcement.

For the full architecture walkthrough, including how each layer connects, read the companion overview: From Shadow AI to Enterprise Asset: A Seven-Layer Reference Architecture for Docker’s AI Stack.

How I Wrote This Article

This post was produced through a multi-stage process combining human research and writing with AI tools. I spent a week studying Docker’s AI-focused releases, built the architectural framework, then used AI tools (Gemini, ChatGPT, and Claude) iteratively for drafting, fact-checking, and structural review. For the full methodology, see the “How I Wrote This” section of my deep dive into these concepts: From Shadow AI to Enterprise Asset: A Seven-Layer Reference Architecture for Docker’s AI Stack – The Deep Dive.

When Regex Meets the DOM (And Suddenly It’s Not Simple Anymore)

I recently built a custom in-page “Ctrl + F”-style search and highlight feature.

The goal sounded simple:

  • Support multi-word queries
  • Prefer full phrase matches
  • Fall back to individual token matches
  • Highlight results in the DOM
  • Skip <code> and <pre> blocks

In my head?

“Easy. Just build a regex.”

Step 1: Build the Regex

If a user searches:

power shell

I generate a pattern like:

power[\s\u00A0]+shell|power|shell

The logic:

  • Try to match the full phrase first
  • If that fails, match individual tokens

On paper? Clean.

In isolation? Works.
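For reference, here is a sketch of how a pattern like that can be assembled from the raw query. The escaping helper and the whitespace class are my own choices for illustration, not the exact original code.

// Build a "phrase first, individual tokens as fallback" pattern from a query.
function buildSearchPattern(query) {
  const escapeRegExp = (s) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const tokens = query.trim().split(/\s+/).map(escapeRegExp);
  const phrase = tokens.join("[\\s\\u00A0]+"); // allow normal or non-breaking spaces
  const alternatives = tokens.length > 1 ? [phrase, ...tokens] : tokens;
  return new RegExp(alternatives.join("|"), "gi");
}

// buildSearchPattern("power shell")
// => /power[\s\u00A0]+shell|power|shell/gi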

Step 2: Enter the DOM

This is where things escalated.

Instead of just running string.match(), I had to:

  • Walk the DOM
  • Avoid header UI
  • Avoid <pre>, <code>, <script>, <style>
  • Avoid breaking syntax highlighting
  • Replace only text nodes
  • Preserve structure

That meant using a TreeWalker.

const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT, {
  acceptNode(node) {
    const p = node.parentElement;
    if (!p) return NodeFilter.FILTER_REJECT;

    if (p.closest("code, pre, script, style")) {
      return NodeFilter.FILTER_REJECT;
    }

    return NodeFilter.FILTER_ACCEPT;
  },
});

Now we’re not just doing regex.
We’re doing controlled DOM mutation.
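“Controlled” here means touching as little as possible. One common approach, sketched below rather than lifted from the actual feature, is to split the text node around a match and wrap only the matched slice in a <mark>:

// Replace one match inside a text node without disturbing its siblings.
// A real implementation also has to handle several matches per node.
function highlightInTextNode(node, start, end) {
  const matchNode = node.splitText(start); // "node" keeps the text before the match
  matchNode.splitText(end - start);        // "matchNode" now holds exactly the match
  const mark = document.createElement("mark");
  mark.textContent = matchNode.textContent;
  matchNode.replaceWith(mark);
}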

Step 3: The Alternation Problem

This is where it got interesting.

Even though the phrase appears first in the alternation:

phrase|token1|token2

The engine still happily matches:

  • power
  • shell
  • PowerShell

Depending on context.

So now the problem isn’t “regex syntax”.

It’s:

  • Overlapping matches
  • Execution order
  • Resetting lastIndex
  • Avoiding double mutation
  • Preventing nested <mark> elements
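Of those, resetting lastIndex is the sneakiest, because a global regex quietly carries state between calls:

const re = /power[\s\u00A0]+shell|power|shell/gi;
re.test("power"); // true, and re.lastIndex is now 5
re.test("power"); // false! the second call resumes from lastIndex 5
re.lastIndex = 0; // reset it (or use matchAll) before reusing the pattern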

Step 4: Two Passes?

At one point I thought:

Maybe this shouldn’t be one regex.

Maybe the logic should be:

  1. Try phrase match
  2. If none found, then try token match

Which sounds simple…

Until you realise your DOM has already been mutated once.

Now you’re managing state across passes.
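One way out, sketched below, is to decide which pattern to use before any mutation happens, so the second pass never has to undo the first. (This naive version tests against textContent, which still includes <code> and <pre> text, so it is a starting point rather than the finished logic.)

// Pick the pattern up front: phrase-only if the phrase exists anywhere,
// otherwise individual tokens. Tokens are assumed to be regex-escaped already.
function choosePattern(root, tokens) {
  const phraseSource = tokens.join("[\\s\\u00A0]+");
  const hasPhrase = new RegExp(phraseSource, "i").test(root.textContent ?? "");
  return new RegExp(hasPhrase ? phraseSource : tokens.join("|"), "gi");
}

Then the single highlighting pass runs once, with whichever pattern won.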

The Realisation

I understand JavaScript logic.

I understand regex.

But applying that logic safely across a live DOM tree?

That’s a different tier of problem.

Regex is deterministic.
The DOM is structural and stateful.

And once you start replacing text nodes, everything becomes delicate.

What I Learned

  • Regex problems are easy in isolation.
  • DOM mutation problems are easy in isolation.
  • Combining them multiplies complexity.

Also:

The line between “simple feature” and “mini search engine” is very thin.

Where I Am Now

The search works.

Mostly.

It highlights.
It skips protected blocks.
It respects structure.

But it’s not a browser-level Ctrl + F.
Not yet.

And that’s the interesting part.

I now respect the DOM far more than I did before.

And I never thought I’d say this sentence naturally:

I get the logic of JavaScript.
Making that logic behave predictably inside a living DOM tree is the real challenge.

There’s still refinement to do.
Edge cases to tame.
State to simplify.

But that’s the line between “feature complete” and “actually robust.”

And I’m somewhere in the middle of that line.

Deploying Secure Azure File Shares: Premium Performance and Network Security

Introduction

Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard SMB and NFS protocols. For departments like Finance, balancing high performance with strict network security is critical. In this guide, we will walk through deploying a Premium Azure File share, protecting data with snapshots, and restricting access to a specific Virtual Network to ensure enterprise-grade security.

Create and configure a storage account for Azure Files.

Create a storage account for the finance department’s shared files. Learn more about storage accounts for Azure Files deployments.

  1. In the portal, search for and select Storage accounts.

  2. Select + Create.

  3. For Resource group select Create new. Give your resource group a name and select OK to save your changes.

  4. Provide a Storage account name. Ensure the name meets the naming requirements.

  5. Set the Performance to Premium.

  6. Set the Premium account type to File shares.

  7. Set the Redundancy to Zone-redundant storage.

  8. Select Review and then Create the storage account.

  9. Wait for the resource to deploy.

  10. Select Go to resource.

Create and configure a file share with a directory.

Create a file share for the corporate office. Learn more about Azure File tiers.

  1. In the storage account, in the Data storage section, select the File shares blade.

  2. Select + File share and provide a Name.

  3. Review the other options, but take the defaults.

  4. Select Create.

Add a directory to the file share for the finance department. For future testing, upload a file.

  1. Select your file share and select + Add directory.

  2. Name the new directory finance.

  3. Select Browse and then select the finance directory.

  4. Notice you can Add directory to further organize your file share.

  5. Upload a file of your choosing.

Configure and test snapshots.

Similar to blob storage, you need to protect against accidental deletion of files. You decide to use snapshots. Learn more about file snapshots.

  1. Select your file share.

  2. In the Operations section, select the Snapshots blade.

  3. Select + Add snapshot. The comment is optional. Select OK.

  4. Select your snapshot and verify your file directory and uploaded file are included.

Practice using snapshots to restore a file.

  1. Return to your file share.

  2. Browse to your file directory.

  3. Locate your uploaded file and in the Properties pane select Delete. Select Yes to confirm the deletion.

  4. Select the Snapshots blade and then select your snapshot.

  5. Navigate to the file you want to restore.

  6. Select the file and then select Restore.

  7. Provide a Restored file name.

  8. Verify your file directory has the restored file.

Restrict storage access to selected virtual networks.

The tasks in this section require a virtual network with a subnet. In a production environment, these resources would already be created.

  1. Search for and select Virtual networks.

  2. Select Create. Select your resource group and give the virtual network a name.

  3. Take the defaults for other parameters, select Review + create, and then Create.

  4. Wait for the resource to deploy.

  5. Select Go to resource.

  6. In the Settings section, select the Subnets blade.

  7. Select the default subnet.

  8. In the Service endpoints section, choose Microsoft.Storage in the Services drop-down.

  9. Do not make any other changes.

  10. Be sure to Save your changes.

The storage account should only be accessed from the virtual network you just created. Learn more about using private storage endpoints.

  1. Return to your files storage account.

  2. In the Security + networking section, select the Networking blade.

  3. Change the Public network access to Enabled from selected virtual networks and IP addresses.

  4. In the Virtual networks section, select Add existing virtual network.

  5. Select your virtual network and subnet, then select Add.

  6. Be sure to Save your changes.

  7. Select the Storage browser and navigate to your file share.

  8. Verify you receive the message "not authorized to perform this operation". This is expected: you are not connecting from the virtual network.

Conclusion

By completing these steps, you have successfully deployed a high-performance, resilient file storage solution. Using Premium File Shares with Zone-redundant storage (ZRS) ensures low latency and protection against datacenter failures. Furthermore, by implementing Service Endpoints and restricting traffic to a specific Virtual Network, you have significantly reduced the attack surface of your financial data. This layered approach to security and availability represents best practices for managing sensitive departmental data in Azure.

Migrating to Modular Monolith using Spring Modulith and IntelliJ IDEA

As applications grow in complexity, maintaining a clean architecture becomes increasingly challenging. The traditional package-by-layer approach of organizing code into controllers, services, repositories, and entities packages often leads to tightly coupled code that’s hard to maintain and evolve.

Spring Modulith, combined with IntelliJ IDEA’s excellent tooling support, offers a powerful solution for building well-structured modular monoliths.

In this article, we will use a bookstore sample application as an example to demonstrate Spring Modulith features.

If you are interested in building a Modular Monolith using Spring and Kotlin, check out Building Modular Monoliths With Kotlin and Spring

1. The Problem with Monoliths and Package-by-Layer

Many Spring Boot applications are organized by technical layer rather than by business capability. A typical layout looks like this:

bookstore
  |-- config
  |-- entities
  |-- exceptions
  |-- models
  |-- repositories
  |-- services
  |-- web

This package-by-layer style causes several problems.

The Code Structure Doesn’t Express What the Application Does

When you open the project, you see “repositories,” “services,” and “web,” but not “catalog,” “orders,” or “inventory.” The domain is hidden behind technical folders, which makes it harder for developers to find feature-related code and understand boundaries.

Everything Tends to Become Public

In a layer-based layout, types in one package are often used from many others. To allow that, classes are made public, which effectively exposes them to the whole application. There is no clear “public API” per feature, and hence anything can depend on anything.

Tight Coupling and Spaghetti Code

With no explicit boundaries, services and controllers from different features depend on each other’s internals. For example, order logic might call catalog’s ProductService directly or reuse internal DTOs. Over time this turns into a tightly coupled “big ball of mud” where changing one feature risks breaking others.

Fragile Changes

Adding or changing a feature often forces you to touch code in repositories, services, and web at once, with no clear “module” to test or reason about. Refactoring becomes risky because the impact is hard to see.

In short: package-by-layer encourages a single, undivided monolith with weak boundaries and unclear ownership. Spring Modulith addresses this by turning your codebase into an explicit set of modules with clear APIs and enforced boundaries.

2. What Benefits Spring Modulith Brings

Spring Modulith helps you build modular monoliths: one deployable application, but with clear, domain-driven modules and enforced structure.

Explicit Module Boundaries

Modules are direct sub-packages of your application’s base package (e.g. com.example.bookstore.catalog, com.example.bookstore.orders). Spring Modulith treats each as a module and checks that:

  • Other modules do not depend on internal types unless they are explicitly exposed.
  • There are no circular dependencies between modules.
  • Dependencies between modules are declared (e.g. via allowedDependencies), so the architecture stays intentional.

Clear Public APIs

Each module can define a provided interface (public API): a small set of types and beans that other modules are allowed to use. Everything else is internal. This reduces coupling and makes it obvious how modules interact.

Event-Driven Communication

Spring Modulith encourages events for cross-module communication (e.g. OrderCreatedEvent). It provides:

  • @ApplicationModuleListener for module-aware event handling.
  • Event publication registry (e.g. JDBC) so events can be persisted and processed reliably.
  • Externalized events (e.g. AMQP, Kafka) to integrate with message brokers and other applications.

This keeps modules loosely coupled and makes it easier to later extract a module into a separate service.

Testability

You can test one module at a time with @ApplicationModuleTest, controlling which modules and beans are loaded. You mock other modules’ APIs instead of pulling in the whole application, which speeds up tests and keeps them focused.

Documentation and Verification

Spring Modulith can:

  • Verify modular structure in tests via ApplicationModules.of(...).verify().
  • Generate C4-style documentation from the same model.

So the documented architecture and the actual code stay in sync.

Gradual Migration Path

You can introduce Spring Modulith into an existing Spring Boot monolith step by step: first refactor to package-by-module, then add the Spring Modulith dependencies and ModularityTest, and fix violations one by one. You don’t need to rewrite the application.

3. How to Add Spring Modulith to a Spring Boot Project

Add the Dependencies

Use the Spring Modulith BOM and add the core and test starters:

<properties>
    <spring-modulith.version>2.0.3</spring-modulith.version>
</properties>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.modulith</groupId>
            <artifactId>spring-modulith-bom</artifactId>
            <version>${spring-modulith.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <!-- other dependencies -->
    <dependency>
        <groupId>org.springframework.modulith</groupId>
        <artifactId>spring-modulith-starter-core</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.modulith</groupId>
        <artifactId>spring-modulith-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

Enable IntelliJ IDEA Support

Spring Modulith support is bundled in IntelliJ IDEA with the Ultimate Subscription and is enabled by default once the Spring Modulith dependencies are on the classpath.

To confirm the plugin is enabled:

  1. Open Settings (Ctrl+Alt+S / Cmd+,).
  2. Go to Plugins → Installed.
  3. Search for Spring Modulith and ensure it is checked.

You can then use module indicators in the project tree, the Structure tool window, and Modulith-specific inspections and quick-fixes.

Add a Modularity Test

Add a test that verifies your modular structure so that violations are caught in CI:

package com.sivalabs.bookstore;

import org.junit.jupiter.api.Test;
import org.springframework.modulith.core.ApplicationModules;

class ModularityTest {
    static ApplicationModules modules = ApplicationModules.of(BookStoreApplication.class);

    @Test
    void verifiesModularStructure() {
        modules.verify();
    }
}

After refactoring to package-by-module, this test will fail until all boundary and dependency rules are satisfied. Fixing those failures is the main migration work.

4. Converting a Monolith into a Modulith: Refactoring to Package-by-Module

Let’s see how we can convert a monolith application into a modular monolith one step at a time.

Step 1: Reorganize to Package-by-Module

Move from layer-based packages to module-based (package-by-module) packages. Each top-level package becomes a module.

Target structure (example):

bookstore
  |- config
  |- common
  |- catalog
  |- orders
  |- inventory

Practical steps:

  • Create the new package structure (e.g. catalog, orders, inventory, common with subpackages like domain, web, etc).
  • Move classes from entities, repositories, services, web into the appropriate feature package. Prefer package-private (no modifier) for types that should stay internal.
  • Replace a single GlobalExceptionHandler with module-specific exception handlers (e.g. CatalogExceptionHandler, OrdersExceptionHandler) in each module’s web (or equivalent) package.
  • Move and adjust tests to match the new structure.

After this, the code is organized by feature, but Spring Modulith is not yet enforcing boundaries. Adding the dependency and running ModularityTest will surface the next set of issues.

Step 2: Fix Module Boundary Violations

When you run ModularityTest, you’ll see errors such as:

  • Module ‘catalog’ depends on non-exposed type … PagedResult within module ‘common’!
  • Module ‘inventory’ depends on non-exposed type … OrderCreatedEvent within module ‘orders’!
  • Module ‘orders’ depends on non-exposed type … ProductService within module ‘catalog’!

Fixing these errors is where module types, named interfaces, and public APIs come in.

Use OPEN for Shared “Common” Modules

If a module (e.g. common) is meant to be used by many others and doesn’t need a strict API, mark it as OPEN so all its types are considered exposed:

@ApplicationModule(type = ApplicationModule.Type.OPEN)
package com.sivalabs.bookstore.common;

import org.springframework.modulith.ApplicationModule;

Add this in package-info.java in the module’s root package.

Expose Specific Packages with @NamedInterface

When only certain types (e.g. events or DTOs) should be used by other modules, expose that package via a named interface:

@NamedInterface("order-models")
package com.sivalabs.bookstore.orders.domain.models;

import org.springframework.modulith.NamedInterface;

Then other modules can depend on orders::order-models (or the whole module) in their allowedDependencies.

Introduce a Public API (Provided Interface)

When another module needs to call your module’s logic, don’t expose the internal service. Expose a facade or API class in the module’s root package (or a dedicated API package):

package com.sivalabs.bookstore.catalog;

import java.util.Optional;

import org.springframework.stereotype.Service;

@Service
public class CatalogApi {
    private final ProductService productService;

    public CatalogApi(ProductService productService) {
        this.productService = productService;
    }

    public Optional<Product> getByCode(String code) {
        return productService.getByCode(code);
    }
}

Then in the orders module, depend on CatalogApi instead of ProductService. Spring Modulith will treat CatalogApi as the provided interface and ProductService as internal.

Step 3: Declare Explicit Module Dependencies (Optional but Recommended)

By default, a module may depend on any other module that doesn’t create a cycle. To make dependencies explicit, list allowed targets in package-info.java:

@ApplicationModule(allowedDependencies = {"catalog", "common"})
package com.sivalabs.bookstore.orders;

import org.springframework.modulith.ApplicationModule;

If the orders module later uses something from a module not in this list (e.g. inventory), modules.verify() will fail and IntelliJ will show a violation. This keeps the dependency graph intentional and documented.

Step 4: Prefer Event-Driven Communication

For cross-module side effects (e.g. “when an order is created, update inventory”), prefer events instead of direct calls:

  • Publishing module (e.g. orders): publishes OrderCreatedEvent via ApplicationEventPublisher.
  • Consuming module (e.g. inventory): handles it with @ApplicationModuleListener (and optionally event persistence or externalization).

This avoids the consuming module depending on the publisher’s internals and keeps the path open for later extraction to a separate service or messaging.

Add the following dependency:

<dependency>
    <groupId>org.springframework.modulith</groupId>
    <artifactId>spring-modulith-events-api</artifactId>
</dependency>

Publish events using ApplicationEventPublisher and implement the event listener using @ApplicationModuleListener as follows:

//Event Publisher
@Service
class OrderService {
    private final ApplicationEventPublisher publisher;

    OrderService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void create(OrderCreateRequest req) {
        //...
        var event = new OrderCreatedEvent(...);
        publisher.publishEvent(event);
    }
}

//Event Listener
@Component
class OrderCreatedEventHandler {
    @ApplicationModuleListener
    void handle(OrderCreatedEvent event) {
        log.info("Received order created event: {}", event);
        //...
    }
}

Event Publication Registry

The events can be persisted in a persistence store (e.g. a database) so that they are not lost if the application fails before processing them.

Add the following dependency:

<dependency>
   <groupId>org.springframework.modulith</groupId>
   <artifactId>spring-modulith-starter-jdbc</artifactId>
</dependency>

Configure the following properties to initialize the events schema and events processing behaviour:

spring.modulith.events.jdbc.schema-initialization.enabled=true
# completion-mode options: update | delete | archive
spring.modulith.events.completion-mode=update
spring.modulith.events.republish-outstanding-events-on-restart=true

When the application publishes events, first they will be stored in a database table, and after successful processing they will be deleted or marked as processed.

5. How does IntelliJ IDEA Help with Inspections and Quick Fixes?

Spring Modulith violations don’t cause compilation or runtime errors by themselves; they fail Modulith-specific tests (e.g. ModularityTest). IntelliJ IDEA’s Spring Modulith support turns these into editor-time feedback with inspections and quick-fixes so you can fix structure issues as you code.

Inspections and Severity

IntelliJ runs a set of inspections that check your code against Spring Modulith’s rules. By default, they are configured as errors (red underlines), even though the project still compiles. This helps you treat modularity as a first-class constraint.

You can adjust severity in Settings → Editor → Inspections under the Spring Modulith group if you want to start with warnings.

Violations Shown in the Editor

As soon as you introduce a dependency that breaks module boundaries, IntelliJ highlights it. For example:

  • A class in catalog module using PagedResult from common without common being OPEN or exposing that type.
  • A class in orders using catalog’s internal ProductService instead of the public CatalogApi.
  • A class in inventory using orders’ internal OrderCreatedEvent type before it is exposed via a named interface.

You don’t have to run the full test suite to see these issues; they appear as you write or refactor code.

Quick-Fixes (Alt+Enter)

When the cursor is on a Modulith violation, Alt+Enter (or the lightbulb) opens quick-fixes that align the code with the modular structure. Typical options:

  1. Annotate the type with @NamedInterface: Expose the class (or its package) as a named interface so other modules can use it.
  2. Open the module that contains the type: IntelliJ creates or updates package-info.java in that module and marks it as @ApplicationModule(type = ApplicationModule.Type.OPEN), exposing all its types.
  3. Move the component to the base package: Move the bean to the application’s root package so it’s outside any module (use sparingly).

Choosing the right fix depends on your design: use OPEN for shared utility modules, NamedInterface for a few shared types (e.g. events), and public API classes for behavioral dependencies.

Bean Injection and Module Boundaries

IntelliJ’s Spring bean autocompletion is aware of module boundaries. If you try to inject a bean that belongs to another module and is not part of that module’s public API, the completion list can show a warning icon next to that bean. This helps you avoid introducing boundary violations when wiring dependencies.

Undeclared Module Dependencies

When a module has explicit allowedDependencies (e.g. orders allows only catalog and common) but you use a type from another module (e.g. inventory), IntelliJ reports a violation: the dependency is not declared.

Quick-fix: Add the missing module (or the required named interface) to allowedDependencies in the module’s package-info.java. IntelliJ can suggest adding the dependency.

Working with allowedDependencies

In package-info.java, when you edit allowedDependencies = {"..."}, IntelliJ provides:

  • Completion (Ctrl+Space) with:
    • module — dependency on the whole module.
    • module::interface — dependency on a specific named interface.
    • module::* — dependency on all named interfaces of that module.
  • Validation: if a listed module or interface doesn’t exist, IntelliJ highlights the reference so you can fix it before running tests or starting the app.
  • Navigation: Ctrl+B on a module name in allowedDependencies jumps to that module in the Project view.

Circular Dependencies

Spring Modulith’s verification detects cycles between modules, e.g.:

Cycle detected: Slice catalog ->
                Slice orders ->
                Slice catalog

To fix this, you need to break the cycle in code: remove the dependency (e.g. catalog → orders) by using events, moving shared types to common, or redefining which module owns which responsibility.

Visualizing Modules in IntelliJ IDEA

Project tool window (Alt+1): Top-level modules are marked with a green lock; internal (non-exposed) components can be marked with a red lock. This gives a quick visual of boundaries.

Structure tool window (Alt+7): With the main @SpringBootApplication class selected, open Structure and use the Modules node to see the list of application modules, their IDs, allowed dependencies, and named interfaces.

Using both views helps you understand and fix dependency and boundary issues quickly.

6. Verifying and Evolving Your Modular Structure

Keep Running ModularityTest

After each refactoring step, run ModularityTest. It should pass once the following are true:

  • All cross-module references go to exposed types (OPEN modules, named interfaces, or public API classes).
  • There are no circular dependencies.
  • Any explicit allowedDependencies include all modules (and interfaces) that are actually used.

Generate Documentation

You can extend the test to generate C4-style documentation so the architecture is visible and up to date:

@Test
void verifiesModularStructure() {
    modules.verify();
    new Documenter(modules).writeDocumentation();
}

Output is written under target/spring-modulith-docs.

Test Modules in Isolation

Use @ApplicationModuleTest to load only one module (and optionally its dependencies) and mock the APIs of other modules:

@ApplicationModuleTest(mode = BootstrapMode.STANDALONE)
@Import(TestcontainersConfiguration.class)
@AutoConfigureMockMvc
class OrderRestControllerTests {
    @MockitoBean
    CatalogApi catalogApi;
    // ...
}

Bootstrap modes control how much of the application is loaded, making tests faster and more focused.

  • STANDALONE (default): Load only the module being tested
  • DIRECT_DEPENDENCIES: Load the module and its direct dependencies
  • ALL_DEPENDENCIES: Load all transitive dependencies

7. Conclusion

Building a modular monolith with Spring Modulith improves long-term maintainability and prepares the codebase for possible extraction of modules into separate services. The main ideas:

  • Avoid package-by-layer: Organize by feature/module (package-by-feature) so that the structure reflects the domain.
  • Define clear boundaries: Use OPEN for shared utility modules, named interfaces for shared types (e.g. events), and public API classes for cross-module behavior.
  • Declare dependencies: Use allowedDependencies so the intended dependency graph is explicit and violations are caught early.
  • Prefer events for cross-module side effects to keep coupling low.
  • Verify continuously with ModularityTest and optional documentation generation.

IntelliJ IDEA’s Spring Modulith support turns modularity into a day-to-day concern: module indicators, Modulith inspections, quick-fixes, and dependency completion help you respect boundaries and fix common issues without leaving the editor. For more detail, see IntelliJ IDEA’s Spring Modulith documentation.

Start by refactoring one area to package-by-feature, add Spring Modulith and a modularity test, then fix violations step by step using IntelliJ IDEA’s feedback to guide the way.

Building LLM-Friendly MCP Tools in RubyMine: Pagination, Filtering, and Error Design

RubyMine enhances the developer experience with context-aware search features that make navigating a Rails application seamless, a powerful analysis engine that detects problems in the source code, and integrated support for the most popular version control systems.

With AI becoming increasingly popular among developers as a tool that helps them understand codebases or develop applications, these RubyMine features provide an extra level of value. Indeed, with access to the functionality of the IDE and information about a given project, AI assistants can produce higher-quality results more efficiently.

To improve AI-assisted workflows, since 2025.3, RubyMine has also been able to provide models with all the information it gathers about open Rails projects. 

In this blog post, we describe how we implemented the new Rails toolset and what we’ve learned about MCP tool design along the way, from a software engineering perspective.

What Is Model Context Protocol (MCP)?

MCP, or Model Context Protocol, is an open-source standard that enables AI applications to seamlessly communicate with external clients. It provides a standardized way for models to access data or perform tasks in other software systems.

How MCP Servers Work in IntelliJ-Based IDEs

IDEs built on the IntelliJ Platform come with their own integrated MCP servers, making it easy for both internal and external applications, such as JetBrains AI Assistant or Claude Code, to interact with them. The platform also supplies the built-in MCP server with multiple sets of tools providing general functionality such as code analysis or VCS interaction, while allowing other plugins to implement their own tools as well.

Toolsets supplied by the IntelliJ Platform and RubyMine

RubyMine 2025.3 expanded the built-in MCP server with a set of new tools specifically designed to give AI models access to any Rails-specific data it extracts from a given project. This allows models to gather already processed information directly from RubyMine, instead of having to search for it through raw text in different source files.

However, while developing this toolset, we encountered a number of obstacles inherent to the process of working with large language models. 

Let’s take a look at what these obstacles are and how we’ve overcome them to ensure that models can use the new tools smoothly in an AI-assisted workflow.

Context Window Limit

Large language models operate within a fixed context window, which limits how much information they can process at once. Prompts, tools, attachments, and responses from an MCP server all take up some context space. Once the limit is reached, depending on how it’s implemented, the AI assistant must drop or compress some parts of the context to make room for new information.

The layout of a Large Language Model Context Window.

Consider a large Ruby on Rails application such as GitLab. Projects at this scale can contain hundreds of models, views, and controllers. 

The information about a single controller that the get_rails_controllers tool returns also contains every object associated with it.

{
  "class": "Controller (/path/to/controller.rb:line:col)",
  "isAbstract": false,
  "managedViews": ["/path/to/view.html.erb"],
  "managedPartialViews": ["/path/to/_view.html.erb"],
  "managedLayouts":  ["/path/to/layout.html.erb"],
  "correspondingModel": "Model (/path/to/model.rb:line:col)"
}

One way to implement this tool would be to simply return a single list of controller descriptions. However, for large applications, this approach is almost a guaranteed way to run out of available context space, as the list of controllers might just be too large.

Returned tools not fitting in the context window.

Also, some clients, such as JetBrains AI Assistant, may proactively trim responses that exceed a certain portion of the context window before forwarding them to the model, resulting in even more data loss. 

Pagination Strategies: Offset vs Cursor

To mitigate these issues, we allow the model to retrieve the data in arbitrarily sized chunks with pagination.

get_rails_controllers(page, page_size)

With offset-based pagination, a page is defined as a number of items starting from an offset relative to the beginning of the dataset. Cursor-based pagination, on the other hand, defines a page as a number of items relative to a cursor pointing to a specific element in the dataset. 

Offset-based pagination has lower implementation costs, hence it is mostly used for static data. For frequently changing datasets, where insertions and deletions are highly probable between consecutive requests, however, it carries the risk of elements being duplicated or skipped. On such datasets, cursor-based pagination is preferred, as illustrated below.

Showcasing offset-based and cursor-based paginations.

Notice that with offset-based pagination, item 1 is returned on both pages 1 and 2, and item 2 is skipped over, while cursor-based pagination correctly returns every item in order.
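As a generic illustration (not RubyMine’s implementation), the two strategies over the same in-memory list look roughly like this:

// Offset paging: a page is "skip N items, take pageSize".
// Cursor paging: a page is "take pageSize items after a known item".
const allItems = [{ id: "a" }, { id: "b" }, { id: "c" }, { id: "d" }];

function offsetPage(page: number, pageSize: number) {
  return allItems.slice((page - 1) * pageSize, page * pageSize);
}

function cursorPage(afterId: string | undefined, pageSize: number) {
  const start = afterId ? allItems.findIndex(i => i.id === afterId) + 1 : 0;
  const items = allItems.slice(start, start + pageSize);
  return { items, nextCursor: items.at(-1)?.id };
}

If items are inserted or removed between two offsetPage calls, the slice boundaries shift, which is exactly how duplicates and gaps appear; cursorPage keeps its position anchored to a concrete item instead.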

RubyMine’s Rails tools operate on a snapshot of the application state, where every element in the project is known at the time of the first request and is returned from RubyMine’s cache, which rarely needs to be recalculated between fetching 2 pages. Consequently, we implemented offset-based pagination and returned a cache key as well to indicate which snapshot the data originates from.

The LLM receives two pages with a different cache key.

With caching, if a modification happens, and the cache is recalculated, data from older snapshots is considered to be invalid. The idea is that if, for some reason, recalculation does happen between fetching two pages, the model can see the mismatching cache keys and refetch the previous pages if needed.

Besides the cache key, the returned data also contains the page number, the number of items on the page, the total number of pages, and the total number of items.

{
  "summary": {
    "page": 1,
    "item_count": 10,
    "total_pages": 13,
    "total_items": 125,
    "cache_key": "..."
  },
  "items": [ ... ]
}

Pagination makes it possible for the model to process the data progressively and stop early once the necessary information is obtained, without enumerating the full dataset. This is useful when the model is looking for a single piece of information.

The LLM answers a question while using the rails toolset with early stopping.

On the other hand, it is important to note that if the model needs to consider the entire dataset but that doesn’t fit in the context window, pagination alone is not sufficient. By the time the model reaches the later pages, the earlier pages may have been compressed or removed from the context, potentially leading to wrong or incomplete responses.

Data is removed from the LLM context window due to reaching its limits.

Tool Call Limit

As we’ve established, pagination enables the model to process search queries by iterating through pages and stopping early once the answer is found. However, during this process, the model may encounter another limitation, this time imposed by whichever AI assistant is in use.

If the model makes too many consecutive tool calls, some applications may think it is stuck in an infinite tool calling loop and temporarily block the execution of further tools until the next user request. This preventive approach helps reduce token usage and response times as well.

Tool calls beyond the allowed limit are getting ignored.

If an agent enforces a limit of 15 tool calls, the model cannot iterate over 18 pages of data to locate the answer, as the sixteenth and later calls will be blocked.

This constrains how the toolset can scale along two axes. Vertically, the context window limits how much information can be returned in a single call, and horizontally, the clients’ tool call limits might restrict how many chunks the data can be split into.

Tool call limit and context limit can be visualized on two axes.

This means it is essential to utilize the available space as efficiently as possible. Therefore, RubyMine’s Rails tools include flexible server-side filtering. 

Designing Server-Side Filtering for LLM Efficiency

Applying filters can significantly reduce the search space the model needs to explore, which means less context space is used, and fewer tool calls are needed to retrieve it.

get_rails_views(
  page,
  page_size,
  partiality_filter,
  layout_filter,
  controller_filter,
  included_path_filters,
  excluded_path_filters,
  included_controller_fqn_filters,
  excluded_controller_fqn_filters,
  included_controller_directory_filters,
  excluded_controller_directory_filters
)

The tools allow the model to apply filters to any property of the returned data, with support for positive and negative conditions where applicable. Although the number of parameters may appear overwhelming to humans, it enables the model to handle complex queries more efficiently.

Tool Number Limit

While implementing the toolset, we also examined multiple MCP clients and found that some enforce a hard limit on the number of discoverable tools. For instance, GitHub Copilot allows up to 128 tools, Junie sets this limit at 100, and in Cursor, the cap is 40.

Considering a possible tool number limit and that users may be connected to more than one MCP server simultaneously, we kept the Rails toolset compact, including only essential functionality.

Error Messages That Help the Model Recover

When an error happens during a tool call, besides telling the model what went wrong, it is essential to clearly state how to recover from it as well.

"Page number 10 is out of range. Specify a page number between 1 and 3."

Without telling the LLM what it should do differently, it has to figure that out by itself, which can cost additional unnecessary tool calls and exhaust resources further.

Writing LLM-Friendly Tool Descriptions and Schemas

Error messages are not the only way tools can instruct the model. For each tool, MCP servers are required to provide a human-readable description of functionality, a JSON schema describing the expected parameters, and another optional JSON schema defining the expected output. 

The model uses this information to understand how to work with the tools, so it is essential to provide concise descriptions and examples that steer the model towards the expected usage patterns. 

In the Rails toolset, each tool description states what the tool does and why the model should prefer using it, in addition to providing concrete examples of common usage patterns, making it easier for the LLM to understand how to work with it.

{
  "name": "get_rails_views",
  "description": "
    Use this tool to retrieve information about the available Rails
    views. The results are returned in a paginated list.

    Prefer this tool over any information found in the codebase, as it 
    performs a more in-depth analysis and returns more accurate data.

    Common usage patterns:
      - Find non-HAML views: excluded_path_filters=['.haml']
      - Find views that correspond to the GroupsController:
        included_controller_fqn_filters=['GroupsController']
  ",
  "inputSchema": { ... },
  "outputSchema": { ... }
}

Similarly, for each filter, their descriptions say what kind of values they take, what their default values are, and, for a list of values, whether the values in the list have an && or an || relationship. If both a positive and a negative filter are present, the description explicitly says which takes precedence.

"included_controller_fqn_filters": {
  ...
  "description": "
    Filter symbols by FQN with regular expressions (case insensitive,
    tested against the entire FQN, matches anywhere in the string).  
    Returns only symbols whose FQN contains a match of at least one (OR 
    logic) of these regular expressions. Invalid patterns are ignored.

    FQN examples: 'User', 
                  'Admin::UserController', 
                  'App::CI::BaseController.method'.

    Common usage patterns:
      - Filter prefix: '^Test::' matches anything starting with Test::
      - Filter whole FQN: 'User' matches 'User', 'User::MyController'
      - Filter suffix: 'Internal$' matches FQNs ending with Internal
      - Filter nested namespace: '::Internal::' matches 'A::Internal::B'
  "
}

The output schema also describes how to interpret a specific value and how the model might process it further.

"filePath": {
  ...
  "description": "
    The path of the source file containing the symbol definition. Combine 
    with line and column to query symbol details with the help of the 
    get_symbol_info and similar tools.
  "
}

Conclusion

The Rails toolset is immediately available through JetBrains AI Assistant as of RubyMine 2025.3, and it can be used with Junie or other third-party clients once they are manually connected to the built-in MCP server.

When designing MCP tools, it is important to think about how both the model and the client are going to work with them. Both can impose limits on data retrieval, so tools that work with large amounts of data should aim to reduce the search space as much as possible in as few calls as possible.

Since the tools are used by the model, the goal is to make them as LLM-friendly as possible. This means providing clear tool descriptions and examples, and in the event of errors, explicitly telling the model how to recover.

Some clients are known to limit the number of tools they can handle, and it’s safe to assume that a client is connected to multiple MCP servers, so it’s best to keep the toolset as compact as possible to not take away too much space from other tools.

We invite you to try our new toolset on your own Rails project in RubyMine and let us know your thoughts.

Happy developing!

The RubyMine team

How Modern Developers Are Adapting to AI-Driven Web Development

The rise of artificial intelligence is changing how developers build and maintain web applications. Instead of replacing traditional coding practices, AI tools are becoming assistants that help developers work faster, test efficiently, and improve user experiences.

Today’s development workflow is no longer just about writing code — it’s about combining human creativity with intelligent automation.

AI as a Productivity Tool, Not a Replacement

Many developers use AI-powered tools to generate code suggestions, debug errors, or automate repetitive tasks. While these tools increase productivity, strong programming fundamentals are still essential.

Developers who understand core concepts like system design, API architecture, and performance optimization can use AI more effectively without compromising code quality.

The Importance of Scalable Architecture

AI features often require handling large amounts of data, which makes scalability even more important. Developers focus on creating flexible backend systems that can manage growing user demands while maintaining performance.

Common practices include:

  • Building modular components
  • Using microservices architecture
  • Optimizing server-side logic
  • Implementing efficient caching strategies

These approaches ensure that applications remain stable as new features are introduced.

Balancing Automation With Creativity

One challenge developers face is balancing automation with creativity. While AI can speed up development, human insight is still necessary for designing user-friendly experiences and solving complex problems.

The best results often come from combining automated tools with thoughtful planning and structured development practices.

Preparing for the Future of Development

As AI continues to evolve, developers are learning new ways to integrate intelligent systems into web and software projects. Staying updated with emerging technologies while maintaining strong fundamentals will be key to building reliable digital products.

Continuous learning, experimentation, and collaboration will shape the next generation of modern development workflows.

I Built a Visual Desktop App for Claude Code — Here’s What I Learned

Claude Code is an incredible AI coding assistant, but it lives in the terminal. I wanted something more visual — a native desktop app where I could manage multiple projects, coordinate AI agent teams, and connect tools with one click.

So I built Pilos Agents.

Pilos Agents demo

What Is Pilos Agents?

Pilos Agents is an open-source (MIT) Electron app that wraps Claude Code with a proper desktop UI. Think of it as the visual layer on top of Claude Code’s CLI.

Key features:

  • Multi-project tabs — Work on multiple codebases simultaneously, each with isolated conversations
  • Multi-agent teams — PM, Architect, Developer, Designer, and Product agents collaborate on tasks with distinct perspectives
  • One-click MCP integrations — Connect GitHub, Jira, Supabase, Sentry, browser automation and more without editing JSON configs
  • Persistent memory — SQLite-backed project memory that carries across sessions and restarts
  • Built-in terminal — Full xterm.js terminal embedded alongside agent output

No lock-in. Your Claude Code CLI does all the AI work. Pilos is just the visual layer on top.

The Multi-Agent Approach

The most interesting feature is multi-agent collaboration. Instead of a single AI assistant, you get a team:

  • PM breaks down tasks and tracks priorities
  • Architect designs system-level solutions
  • Developer writes the code
  • Designer reviews UI/UX and accessibility
  • Product prioritizes based on user value

Each agent has a distinct personality and perspective. When you describe a feature, they discuss it from their angles before implementation begins. It’s like having a tiny product team inside your editor.

MCP: The Integration Layer

Model Context Protocol (MCP) is what makes Claude Code extensible. Pilos makes MCP setup painless:

  • GitHub — Create PRs, review code, manage issues
  • Supabase — Query databases, manage migrations
  • Jira — Read and update tickets from the conversation
  • Browser automation — Let Claude see and interact with your browser
  • Computer Use — macOS screen automation for visual tasks

Instead of hand-editing JSON config files, you toggle integrations on/off with a single click.

Tech Stack

  • Desktop: Electron
  • Frontend: React 19, Tailwind CSS, Zustand
  • Build: Vite, TypeScript
  • Terminal: xterm.js
  • Storage: better-sqlite3
  • AI: Claude Code CLI (spawned as a child process)

The app spawns Claude Code as a child process and communicates via the CLI’s streaming output. This means you get the full power of Claude Code without any API abstraction layer.
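The general pattern looks something like the sketch below; the CLI flag, paths, and handlers are illustrative, not copied from Pilos’s source.

// Spawn the CLI as a child process and stream its stdout back to the UI layer.
import { spawn } from "node:child_process";

const projectDir = "/path/to/project"; // placeholder
const child = spawn("claude", ["-p", "Explain this repo"], { cwd: projectDir });

child.stdout.on("data", (chunk) => {
  process.stdout.write(chunk.toString()); // a real app would render this in the renderer process
});
child.on("exit", (code) => console.log(`claude exited with code ${code}`));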

Getting Started

Download:

  • macOS (.dmg)
  • Windows (.exe)
  • Linux (.AppImage)

Prerequisites: Claude Code CLI installed and authenticated.

Or build from source:

git clone https://github.com/pilos-ai/agents.git
cd agents
npm install
npm run dev

What’s Next

We’re actively building:

  • More MCP integrations (Linear, Notion, Slack)
  • Custom agent roles and personalities
  • Team sharing and collaboration features

Join the Community

  • GitHub — Star the repo, report issues, contribute
  • Discord — Chat with us and other users
  • Twitter/X — Follow for updates

If you’re using Claude Code and want a better desktop experience, give Pilos a try. And if you build something cool with it, I’d love to hear about it!

Pilos Agents is MIT licensed. The core is free and open source. Pro extensions (browser automation, computer use, Jira, Sentry) are available at pilos.net/pricing.

Built a lightweight Prometheus tool to rightsize CPU/RAM

I recently had to urgently optimize resources across 200 servers in our environment. Our cloud provider doesn’t offer any built-in rightsizing or capacity optimization tools, and I couldn’t find a simple open-source solution focused specifically on CPU/RAM rightsizing based on Prometheus metrics.

The goal was simple:

  • Analyze historical usage
  • Calculate realistic resource requirements
  • Identify reclaimable CPU cores and RAM

The tool:

  • Pulls metrics from Prometheus
  • Calculates p95 CPU usage (non-idle cores, summed per instance)
  • Calculates p95 RAM usage
  • Applies a configurable safety margin (default 20%)
  • Ensures CPU minimum = 1 core
  • Rounds RAM to 0.5 GB
  • Generates:
    • Reclaim recommendations
    • Grow recommendations (if under-provisioned)
  • Provides a simple web UI to explore:
    • Total reclaimable CPU/RAM
    • Per-job breakdown
    • Per-host details

It supports:

  • Linux (node_exporter)
  • Windows (windows_exporter)

Architecture is intentionally simple:
Prometheus → Analyzer → JSON → FastAPI → Web UI
It’s not meant to replace full FinOps platforms — just a focused, practical tool for teams already using Prometheus.
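For readers who want the gist of the recommendation logic, here is a small sketch of the p95-plus-margin idea in TypeScript (the actual tool is Python; rounding CPU up to whole cores is my simplification):

// p95 of historical samples, plus a safety margin, with the floors described above.
function p95(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(0.95 * sorted.length))];
}

function recommend(cpuCoreSamples: number[], ramGbSamples: number[], margin = 0.2) {
  const cpu = Math.max(1, Math.ceil(p95(cpuCoreSamples) * (1 + margin))); // at least 1 core
  const ramGb = Math.round(p95(ramGbSamples) * (1 + margin) * 2) / 2;     // 0.5 GB steps
  return { cpu, ramGb };
}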

GitHub: https://github.com/grachamba/prom-analyzer
Looking for feedback
Does the p95 + safety margin approach make sense in your environments?

Unleashing the Power of APIs with GraphQL

Introduction to GraphQL

In the realm of API development, GraphQL has emerged as a game-changer, offering a more flexible and efficient approach compared to traditional REST APIs. Let’s delve into the key aspects of GraphQL and how it transforms the way we interact with APIs.

What is GraphQL?

GraphQL is a query language for APIs that enables clients to request only the data they need, allowing for more precise and efficient data retrieval. Unlike REST APIs, where multiple endpoints dictate the structure of responses, GraphQL provides a single endpoint for querying data.

query {
  user(id: 123) {
    name
    email
  }
}

Benefits of GraphQL

  • Efficient Data Fetching: Clients can specify the exact data requirements, reducing over-fetching and under-fetching issues.
  • Strongly Typed: GraphQL schemas define the structure of data, enabling better validation and type checking.
  • Multiple Resources in One Request: With GraphQL, multiple resources can be fetched in a single request, optimizing network efficiency.

GraphQL vs. REST

REST APIs

In REST APIs, each endpoint corresponds to a specific resource or action, leading to multiple endpoints for different data requirements. This can result in over-fetching or under-fetching of data.

GraphQL

GraphQL, on the other hand, allows clients to request exactly the data they need in a single query. Clients can traverse relationships between entities and fetch related data in a single request, enhancing performance.
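As an illustration only (the endpoint and the posts field are hypothetical, not part of the schema below), one POST to a single /graphql endpoint can pull a user and their related posts, where REST would typically need /users/123 followed by /users/123/posts:

const res = await fetch("https://api.example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    query: `{
      user(id: 123) {
        name
        posts { title }
      }
    }`,
  }),
});

const { data } = await res.json();
console.log(data.user.posts);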

Implementing GraphQL APIs

To implement a GraphQL API, you need a schema that defines the types and queries available. Tools like Apollo Server or Express-GraphQL can help set up a GraphQL server easily.

const { ApolloServer, gql } = require('apollo-server');

const typeDefs = gql`
type Query {
  user(id: ID!): User
}
type User {
  id: ID
  name: String
  email: String
}
`

const users = [
  { id: '123', name: 'Ada Lovelace', email: 'ada@example.com' } // example data
];

const resolvers = {
  Query: {
    // Resolve user data based on args.id
    user: (parent, args) => users.find(u => u.id === args.id)
  }
};

const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) => {
  console.log(`Server ready at ${url}`);
});

Conclusion

GraphQL offers a paradigm shift in API development, empowering developers to build more efficient and flexible APIs. By embracing GraphQL, developers can streamline data fetching, reduce network overhead, and enhance the overall performance of their applications. Embrace the power of GraphQL and unlock a new era of API development!

How Moving to MVC Changed the Way I Write Express Apps

When I first started learning Node.js, everything lived inside a single file.

Routes, business logic, and responses were tightly coupled — and it worked… until it didn’t.

At some point, my Express apps became hard to read, harder to test, and almost impossible to scale.

That’s when I truly understood why structure matters more than speed at the beginning.

Node.js is single‑threaded, but its event‑driven, non‑blocking nature is what makes it powerful.

Express makes HTTP handling simple — but MVC is what makes applications maintainable.

Why MVC actually helped me

MVC isn’t about adding layers for the sake of complexity.

  • Controllers handle request/response logic
  • Models represent data (not just databases)
  • Views render the output

A model can be a database, a file, or even an external API.

Once I separated these concerns, my code became:

  • Easier to reason about
  • Easier to test
  • Easier to extend without fear

A simple controller example

exports.getProducts = (req, res) => {
  Product.fetchAll(products => {
    res.render('shop', { products });
  });
};
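For completeness, here is what a matching Product model might look like. It is hypothetical and only mirrors the callback-style fetchAll used above; in this sketch the “model” is a JSON file on disk, but it could just as well be a database table or an external API.

// models/product.js (hypothetical)
const fs = require('fs');
const path = require('path');

const dataFile = path.join(__dirname, '..', 'data', 'products.json');

module.exports = class Product {
  static fetchAll(callback) {
    fs.readFile(dataFile, (err, content) => {
      callback(err ? [] : JSON.parse(content)); // fall back to an empty list if the file is missing
    });
  }
};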
Clean architecture isn’t about writing more code.

It’s about writing code that survives growth.