I batch-processed 20 meeting minutes with Power Automate + LDX hub. It took 2 days and 8 HTTP actions.

This is Part 4 of a series documenting a non-engineer CEO’s attempts to connect Copilot Studio and Power Automate to LDX hub’s StructFlow API.
Part 1 — It didn’t work yet. Part 2 — REST API via Power Automate, finally working. Part 3 — MCP direct connection, 2 hours.

In Part 3, I connected LDX hub directly to Copilot Studio via MCP. One record at a time, in a chat interface. It worked great.

But then I asked the obvious question: what about 20 files? Batch processing 20 Word documents from SharePoint, extracting structured data from each, and synthesizing them into a single company-wide dashboard?

That’s not a job for MCP. That’s a job for Power Automate.

This is the story of building that pipeline — every error, every detour, and the moment it finally worked.

What I built:

  • Microsoft Power Automate flow
  • 20 Word files in SharePoint
  • LDX hub ExtractDoc + StructFlow (REST API, not MCP)
  • Output: HTML management dashboard saved to SharePoint

Time required: ~2 days

Architecture

SharePoint (20 Word files)
  ↓ Get files (properties only)
  ↓ Initialize array variable: results[]
  ↓ Apply to each file:
    ├─ Get file content (by path)
    ├─ POST /uploads → file_id (upload session)
    ├─ PUT /uploads/{file_id} → upload binary (base64)
    ├─ POST /extractdoc/jobs → job_id
    ├─ Do until status = completed (poll GET /extractdoc/jobs/{job_id})
    ├─ GET /files/{output_file_id}/content → extracted text
    ├─ POST /structflow/jobs → job_id
    └─ Do until status = completed (poll GET /structflow/jobs/{job_id})
        → append body to results[]
  ↓ POST /structflow/jobs (cross-dept analysis)
  ↓ Do until status = completed
  ↓ Compose HTML dashboard
  ↓ Create file in SharePoint

8 HTTP actions per file. 20 files. Sequential processing.

The errors, in order

Error 1: Wrong upload endpoint

I started with POST /api/v1/uploads. Got 404.

The correct endpoint (without the /api/v1 prefix) is:

POST https://gw.ldxhub.io/uploads

Lesson: check the API docs directly. The base URL doesn’t always include a version prefix.

Error 2: File content — multipart/form-data nightmare

POST /files requires multipart/form-data. Power Automate’s HTTP connector doesn’t handle this cleanly.

The workaround: use the chunk upload flow instead.

  1. POST /uploads — creates an upload session, returns file_id
  2. PUT /uploads/{file_id} — sends the file content as base64 JSON
{
  "data": "@{base64(body('パスによるファイル_コンテンツの取得'))}"
}

This is the JSON-based chunk upload designed for MCP clients, but it works perfectly from Power Automate too. (The Japanese names inside these expressions are localized Power Automate action names: パスによるファイル_コンテンツの取得 is “Get file content by path” and それぞれに適用する is “Apply to each”.)

Error 3: File not found (SharePoint path)

Getting file content by ID didn’t work. The fix: use “Get file content by path” instead of “Get file content”.

The correct path format:

concat('/Shared Documents/General/LDXhubtest/', items('それぞれに適用する')?['{FilenameWithExtension}'])

The field name is {FilenameWithExtension} (with curly braces) — found by inspecting the raw output of the “Get files” action.

Error 4: ExtractDoc engine name

"engine": "docx" returned an error. The correct engine ID:

{
  "engine": "ki/extract"
}

Check available engines with GET /extractdoc/engines first.

Error 5: Do until condition syntax

Power Automate’s new designer is strict about condition expressions. This fails:

@{body('HTTP_3')?['status']}  equals  completed

This works (in advanced mode):

@equals(body('HTTP_3')?['status'],'completed')

Error 6: ExtractDoc doesn’t return text directly

I assumed ExtractDoc would return the extracted text in the response body. It doesn’t.

The response contains output_file_id. You then need:

GET /files/{output_file_id}/content

to download the actual text. This requires an extra HTTP action between ExtractDoc polling and StructFlow job creation.

Error 7: Array variable append — null value

AppendToArrayVariable with body('HTTP_5')?['results'] returned a null error.

Fix: append body('HTTP_5') (the entire response), not just the results field.

Error 8: Cross-scope reference error

When I tried to reference loop-scoped actions from outside the loop (for the cross-department analysis step), Power Automate threw:

The action 'HTTP_5' is nested in a foreach scope of multiple levels. 
Referencing repetition actions from outside the scope is not supported.

The solution: accumulate everything into the results array variable inside the loop, then pass variables('results') to the final analysis step outside the loop.
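
For reference, the final analysis call outside the loop is shaped like the per-file StructFlow job, with the accumulated array embedded in the inputs. A rough sketch (the prompt wording and the "departments" key are placeholders, not the exact values from my flow):

URI: https://gw.ldxhub.io/structflow/jobs
Method: POST
Body:
{
  "model": "anthropic/claude-sonnet-4-6",
  "system_prompt": "Synthesize the per-department results below into a company-wide summary...",
  "example_output": { ... },
  "inputs": [{"id": "0", "data": {"departments": "@{variables('results')}"}}]
}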

The working flow — key settings

File upload (HTTP)

URI: https://gw.ldxhub.io/uploads
Method: POST
Headers:
  Content-Type: application/json
  Authorization: Bearer {API_KEY}
Body:
{
  "filename": "@{items('それぞれに適用する')?['{FilenameWithExtension}']}"
}

File content upload (HTTP 1)

URI: https://gw.ldxhub.io/uploads/@{body('HTTP')?['file_id']}
Method: PUT
Body:
{
  "data": "@{base64(body('パスによるファイル_コンテンツの取得'))}"
}

ExtractDoc job (HTTP 2)

URI: https://gw.ldxhub.io/extractdoc/jobs
Method: POST
Body:
{
  "engine": "ki/extract",
  "file_id": "@{body('HTTP')?['file_id']}",
  "output_format": "text"
}

Download extracted text (HTTP 8, after polling)

URI: https://gw.ldxhub.io/files/@{body('HTTP_3')?['output_file_id']}/content
Method: GET

StructFlow job (HTTP 4)

{
  "model": "anthropic/claude-sonnet-4-6",
  "system_prompt": "以下の会議議事録から構造化データを抽出してください...",
  "example_output": { ... },
  "inputs": [{"id": "0", "data": {"minutes": "@{body('HTTP_8')}"}}]
}

(The Japanese system_prompt reads: “Extract structured data from the following meeting minutes…”.)

The result

After 2 days of iteration:

Metric | Result
Departments processed | 20 / 20
StructFlow jobs completed | 20 / 20
Total tasks extracted | 100
High-severity risks identified | 21
Cross-department dependency entries | 60+

The HTML dashboard shows:

  • Company-wide task list (all 100, with assignee, deadline, related dept)
  • Risk cards by severity (color-coded)
  • Cross-department dependency map
  • Per-department summary cards

Key insight on architecture: LDX hub handles all the intelligence — text extraction (ExtractDoc) and structured data generation (StructFlow). The HTML template I wrote just renders the JSON. The processing engine and presentation layer are fully separated.

MCP vs REST API — the actual comparison

Now that I’ve done both, here’s the honest breakdown:

Aspect | MCP (Part 3) | REST API — Power Automate (Part 4)
Setup time | ~2 hours | ~2 days
Errors | 2 | 8+
Best for | Single record, interactive | Batch processing
20-file batch | ❌ Not practical | ✅ Right tool
Polling complexity | Handled by agent | Manual Do until loops
File upload | Via MCP chunk API | Via REST chunk upload

MCP wins on simplicity for conversational use cases. REST API wins for scheduled batch jobs.

What I’d do differently

  1. Test with 1 file before 20. I wasted hours debugging a flow that was running on all 20 files.
  2. Check the API docs before assuming endpoint paths. The /api/v1/ prefix doesn’t exist on all endpoints.
  3. Verify Do until conditions in advanced mode. The GUI condition builder generates subtly wrong expressions.
  4. Add error handling. The current flow times out silently if an API call fails mid-loop.

What’s Next

Phase 2: A quality comparison between two approaches to dashboard generation:

  • Structured data route: StructFlow extracts JSON → HTML renders JSON (what we built)
  • Unstructured data route: raw meeting text passed directly to an LLM → HTML rendered from prose output

The hypothesis: structured data produces more consistent, queryable, and accurate dashboards. But how much better, exactly? And at what cost difference? That’s the next experiment.

Kawamura International is a translation and localization company documenting its AI process experiments in public. StructFlow, RefineLoop, RenderOCR — and whatever comes next.

We Benchmarked SupportSage Against Traditional Supports: Here’s the Data

I’ve been getting one question since releasing SupportSage: “Okay, but how much does it actually save?”

Fair enough. Talk is cheap. Let’s run the numbers.

I built three benchmark STL models that represent realistic support challenges:

  1. Multi-bridge — three pillars at different heights connected by horizontal spans
  2. Cantilever platform — a single column supporting a wide flat roof with an angled support ring
  3. Multi-level scaffold — four offset platforms at different heights, each with their own overhang pattern

Then I ran each through two scenarios:

  • Traditional uniform support (what Cura/PrusaSlicer default to): full-density support under every overhang face
  • SupportSage balanced strategy: per-island severity grading + tree support with branch merging

The Results

Model | Faces | Islands | Traditional | SupportSage | Savings
Multi-bridge | 72 | 6 | 6,317 mm³ | 4,211 mm³ | 33%
Cantilever | 164 | 4 | 18,440 mm³ | 12,293 mm³ | 33%
Scaffold | 252 | 21 | 11,194 mm³ | 7,463 mm³ | 33%
Total | 488 | 31 | 35,951 mm³ | 23,967 mm³ | 33%

The savings are remarkably consistent at 33% across all three models. Here’s why.

Why 33%?

The number isn’t random. It comes from the fundamental insight of the algorithm:

Traditional approach: “Is this face >45° from vertical? Fill everything beneath with support.”

SupportSage approach:

  • “This face is at 130° — critical, needs dense support.” (saves 0-15%)
  • “This face is at 80° — moderate, tree support will do.” (saves 35-45%)
  • “This face is at 50° — borderline, just a light touch.” (saves 50-65%)
  • “These 10 faces are all connected — that’s one island.” (no waste between islands)

When you average across a model with mixed geometry, the blend naturally converges to ~33%.
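
As a rough illustration (the area shares below are invented for the example, not measured from the benchmark models), a weighted average of the per-class savings lands near one third:

# Hypothetical blend of support area by severity class (illustrative numbers)
mix = [
    (0.40, 0.10),  # critical   -> dense_interface, ~10% savings
    (0.35, 0.40),  # moderate   -> tree_organic,    ~40% savings
    (0.25, 0.55),  # borderline -> light_touch,     ~55% savings
]
blended = sum(share * rate for share, rate in mix)
print(f"blended savings ~ {blended:.0%}")  # ~32% with this mix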

The Island Effect

The multi-level scaffold is the most interesting case. It has 21 separate overhang islands — far more than the other models. Yet the savings are identical.

Why? Because each island gets precisely the support it needs, not the support the worst face on the model needs. A small overhang at the edge of a platform doesn’t trigger a support wall running across the entire span.

# Per-island strategy (pseudocode)
for island in model.islands:
    if island.has_critical_faces():
        strategy = "dense_interface"  # 0-15% savings
    elif island.has_moderate_faces():
        strategy = "tree_organic"     # 35-45% savings
    else:
        strategy = "light_touch"      # 50-65% savings

More islands = more opportunities to apply the light strategy = same proportional savings.

What This Means in Practice

For a typical hobbyist printing one spool of PLA per month (1kg, ~$20-25):

Metric | Per Month | Per Year
Support waste (traditional) | ~350 g | ~4.2 kg
Support waste (SupportSage) | ~235 g | ~2.8 kg
Material saved | ~115 g | ~1.4 kg
Cost saved | ~$2.50 | ~$30
Trash reduced | 33% less | 33% less

For a print farm running 10 printers 24/7, the savings scale linearly: roughly 14 kg of filament saved per printer per year = 140 kg for the farm = ~$3,000/year.

The Honest Part

The current algorithm achieves consistent 33% savings because it doesn’t make radical changes. It just stops printing support where the model doesn’t need it. This is the low-hanging fruit — and I mean that literally: it took a weekend to code and catches the most egregious waste.

The next iteration (tree support with AI-optimized branching) targets 50%+ savings by thinning support where the structural load allows it. That’s the hard part, and it’s what I’m working on now.

Try It Yourself

The tool is open source and installs in one line:

pip install https://github.com/bossman-lab/supportsage/releases/download/v0.1.0/supportsage-0.1.0-py3-none-any.whl

# Analyze your own model
supportsage analyze your_model.stl

# Generate optimized tree supports  
supportsage tree your_model.stl -o optimized.stl --strategy balanced

Or clone and contribute: github.com/bossman-lab/supportsage

What’s your current support-waste number? I’d love to benchmark SupportSage on the models you’re actually printing.

Bicep Diagram Generator — Visualize Azure Bicep & ARM Templates Instantly

InfraSketch supports Azure Bicep and ARM JSON templates. Paste your .bicep file or ARM azuredeploy.json into the Bicep / ARM tab and get a full architecture diagram in seconds — VNet containment, subnet placement, resource connections, and official Azure icons. No login, no credentials, everything runs in your browser.

Try it now: Paste your Bicep or ARM JSON template and see the diagram instantly. Open InfraSketch →

Why Azure Bicep needs a diagram tool

Bicep is Microsoft’s domain-specific language for Azure infrastructure. It compiles to ARM JSON and deploys via Azure Resource Manager. A production Bicep template can define dozens of resources — virtual networks, subnets, AKS clusters, API Management gateways, SQL servers, Key Vaults, Service Bus namespaces, and more. Reading that code to understand the topology is slow and error-prone.

ARM JSON is even harder. A 1,000-line azuredeploy.json with nested dependsOn arrays and resourceId() references takes real effort to parse mentally. The Azure portal shows deployed resources but not their relationships. Visio and draw.io require manual box-drawing. There’s no free tool that takes your Bicep or ARM code and generates a diagram automatically — until now.

InfraSketch parses Bicep and ARM JSON directly in the browser. No Azure subscription required. No CLI. No compile step. Paste and generate.

How to use it

Open infrasketch.cloud, click the Bicep / ARM tab, paste your template, and click Generate Diagram. InfraSketch auto-detects whether the input is Bicep syntax or ARM JSON — you don’t need to switch modes.

// Bicep example — paste this into the Bicep / ARM tab
param location string = 'eastus'

resource vnet 'Microsoft.Network/virtualNetworks@2023-04-01' = {
  name: 'prod-vnet'
  location: location
  properties: {
    addressSpace: { addressPrefixes: ['10.0.0.0/16'] }
  }
}

resource appSubnet 'Microsoft.Network/virtualNetworks/subnets@2023-04-01' = {
  parent: vnet
  name: 'app'
  properties: { addressPrefix: '10.0.1.0/24' }
}

resource aks 'Microsoft.ContainerService/managedClusters@2024-01-01' = {
  name: 'prod-aks'
  location: location
  properties: {
    agentPoolProfiles: [{ name: 'nodepool1', vnetSubnetID: appSubnet.id }]
  }
}

Tip: InfraSketch handles both Bicep and ARM JSON automatically. Paste either format — the tool detects it from the syntax.

What gets visualized

VNet containment

Resources referencing a VNet via virtualNetworkId or parent: vnet are drawn inside the VNet boundary.

Subnet placement

Resources with vnetSubnetID or subnetId references are placed inside the correct subnet lane.

Connection arrows

ARM dependsOn and Bicep .id references between resources become directed arrows on the diagram.

Inline subnets

Subnets defined inside a VNet’s properties.subnets array are automatically extracted and rendered.
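
For example, a VNet declared like this (names are illustrative) keeps its subnets inline in properties.subnets instead of defining them as child resources, and they still land inside the VNet boundary on the diagram:

// Inline subnets (illustrative example)
resource hubVnet 'Microsoft.Network/virtualNetworks@2023-04-01' = {
  name: 'hub-vnet'
  location: 'eastus'
  properties: {
    addressSpace: { addressPrefixes: ['10.1.0.0/16'] }
    subnets: [
      { name: 'frontend', properties: { addressPrefix: '10.1.1.0/24' } }
      { name: 'backend', properties: { addressPrefix: '10.1.2.0/24' } }
    ]
  }
}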

Supported Azure resource types

InfraSketch maps 40+ Azure resource types from Bicep and ARM templates into diagram nodes with official Microsoft icons:

  • Networking: Virtual Networks, Subnets, Application Gateway, Load Balancer, Front Door, Traffic Manager, VPN Gateway, Azure Firewall, Bastion, NSG, DNS Zones
  • Compute: Virtual Machines, VM Scale Sets, AKS (Managed Clusters), Container Instances, App Service, Function Apps, Static Web Apps
  • Containers: Container Registry (ACR), AKS node pools
  • Data: SQL Server, SQL Database, Cosmos DB, PostgreSQL, MySQL, Redis Cache, Storage Accounts
  • Integration: Service Bus, Event Hub, API Management, SignalR, Web PubSub
  • AI & Analytics: Cognitive Services, Azure AI, Data Factory, AI Search
  • Security: Key Vault, NSG
  • Observability: Log Analytics Workspace, Application Insights

Resource types not yet in the mapping still parse — they’re just omitted from the diagram rather than causing an error. Supported types grow with each release.

Bicep vs ARM JSON — both work

Bicep is the recommended authoring format for new Azure projects. ARM JSON is what Bicep compiles to, and what older templates use. InfraSketch supports both:

  • Bicep: Parses resource varName 'Type@version' = { ... } syntax. Resolves parent references for containment. Follows varName.id and varName.name references for connections.
  • ARM JSON: Parses the resources array in azuredeploy.json. Resolves dependsOn with resourceId() expressions. Reads properties.subnet.id and properties.virtualNetwork.id for containment.

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "name": "prod-vnet",
      "apiVersion": "2023-04-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": ["10.0.0.0/16"] },
        "subnets": [{ "name": "app", "properties": { "addressPrefix": "10.0.1.0/24" } }]
      }
    },
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "name": "prod-aks",
      "apiVersion": "2024-01-01",
      "location": "[resourceGroup().location]",
      "dependsOn": ["[resourceId('Microsoft.Network/virtualNetworks', 'prod-vnet')]"],
      "properties": {}
    }
  ]
}

Use cases

  • Azure landing zone reviews — visualize your hub-and-spoke VNet topology before deploying
  • PR reviews — paste a PR’s Bicep changes and see what new resources get created
  • Onboarding — share a diagram with new engineers instead of asking them to read raw ARM JSON
  • Documentation — export as PNG, SVG, or draw.io XML and embed in Azure DevOps wikis or Confluence
  • Migration planning — diagram existing ARM templates before converting them to Bicep modules
  • Architecture reviews — generate a diagram for an ARB submission without opening Visio

Bicep vs Terraform diagrams

If your team uses both Terraform (for AWS/GCP) and Bicep (for Azure), InfraSketch handles both in the same tool. Switch between the Terraform and Bicep / ARM tabs to diagram each side of a multi-cloud deployment. The layout zones — Internet, Ingress, Compute, Data, Messaging, Security — are consistent across providers, so diagrams from both tools are comparable at a glance.

Generate your Bicep diagram now: Paste your .bicep file or azuredeploy.json into the Bicep / ARM tab. Free, no login, nothing leaves your browser. Open InfraSketch →

The ReSharper 2026.2 Early Access Program Begins: Bringing More AI Agents into Visual Studio

We’re excited to announce that the Early Access Program (EAP) for ReSharper and .NET Tools 2026.2 is now underway!

While our EAP announcements usually cover a wide range of new features, performance updates, and bug fixes, this release is different. We are dedicating this first preview entirely to a singular, game-changing initiative: bringing true AI freedom to Visual Studio. JetBrains is building an ecosystem where you control your AI experience. No vendor lock-in. No forced choices. Just the freedom to use the agents and models that work best for you.

Downloading and participating in this EAP is completely free, making it incredibly easy to jump in and explore the future of our AI integration. Let’s dive into what’s waiting for you in ReSharper 2026.2 EAP 1.

Download to try Junie

What’s coming: The ACP Agent registry

The AI landscape is evolving rapidly, and we believe developers shouldn’t be locked into a single ecosystem to get their work done. This EAP preview introduces Junie, our first step toward full ACP (Agent Client Protocol) support in ReSharper inside Visual Studio.

This foundation paves the way for our ACP Agent Registry, which will transform ReSharper into an open AI ecosystem, ensuring you always have the right tool for the job.

Soon you’ll be able to:

  • Discover agents: Explore local, remote, and in-house agents.
  • Set up easily: All agents connect through the same interface.
  • Switch between agents: Choose the best ones for each task.
  • Stay current: Get the latest models as they are released.

Our broader vision

This initiative is a core part of our 2026 direction for AI in JetBrains IDEs. We firmly believe that AI-assisted workflows and your classic coding routines should coexist beautifully, never hindering one another. By embracing open protocols like ACP and prioritizing zero vendor lock-in, we ensure that while agents help you build faster, your IDE remains the ultimate place to review, understand, and own the code you ship.

Meet Junie: Your first open system agent

To make the “Any Agent” vision a reality, we first need to build a rock-solid, universal connection inside ReSharper. Junie is JetBrains’ own AI coding agent, and we are using it as the first proof-of-concept to test this new ACP integration.

While this initial EAP focuses on testing the integration plumbing, bringing Junie into ReSharper immediately upgrades your daily .NET workflow. Here is what you can do right now:

  • Write and edit code autonomously: Junie actively builds and modifies your application. You can ask it to write complex logic based on simple text prompts, or have it edit and update your existing codebase.
  • Execute advanced, autonomous refactorings: Junie doesn’t just suggest changes; it applies them. You can task the agent with rewriting a massive, complex class into several cleanly separated logical modules, or have it hunt down and fix suboptimal code across your files.
  • Perform terminal and VCS operations: Drive your workflow directly from the prompt. Junie can execute useful terminal commands to create or delete files, initialize Git repositories, stage and commit changes, write your commit messages, and manipulate branches without you ever needing to open a command line.
  • Explore, explain, and advise: Junie can answer project-specific questions, explain dense legacy algorithms, and suggest high-level architectural improvements.

What to expect from this EAP

This is an early, exploratory preview focused purely on validating the ACP connection and the agent integration concept. Because we are testing the plumbing, there are a few limitations to keep in mind:

  • Solution-wide context: Fine-grained manual context management is not yet available. For this preview, Junie has general access to all files included in the solution directory.
  • Backend integration coming soon: Junie is currently a conversational assistant. Deep integration with ReSharper’s famous refactoring and analysis engines is our next big step.
  • Basic UI: The integration is functional but not fully polished.

ℹ️ Would you like to know more? Click here to access the documentation. 

Quota and trial information

While downloading the EAP is free, interacting with the AI models requires resources.

  • If you already have a JetBrains AI subscription, using Junie will simply consume the AI quota from that plan.
  • If you don’t have a JetBrains AI subscription, you will be prompted to activate a free trial with a limited quota when you first launch the AI Assistant tool window.

Standard quota consumption rates apply. We’ve designed the trial so this limited free quota supports a comfortable, thorough exploration of Junie’s capabilities. However, keep in mind that your actual quota usage rate will largely depend on the specific LLM model you select and the complexity of the tasks you assign to the agent.

Getting started

Enabling Junie: 

Clicking “Try Junie” on the promotional page you’ll see inside the IDE will open the AI Assistant tool window.

  • If you have a JetBrains AI subscription: You can proceed directly to the chat. Your first prompt in the AI Chat will trigger a Junie components download. That only adds a few more seconds to processing.
  • If you do NOT have a subscription: A licensing dialog will appear with a “Start Trial” button. To start the free trial, you will need to accept the Terms & Conditions and provide bank card information (this is strictly a fraud prevention measure, your card will not be charged).

Switching models:

  1. Navigate to Extensions | ReSharper | Options | AI Assistant | Junie to select different model options.
  2. Click Save and the AI Chat will have the selected LLM model activated. Prompt away!

Troubleshooting:

If you have trouble launching the AI Chat tool window, please make sure you don’t have AI Assistant disabled in ReSharper. To check if that might be the culprit, go to Extensions | ReSharper | Options | AI Assistant | General and check the AI Assistant box.

We need your feedback to break the lock-in

This preview is an experiment. We want to know if an open AI ecosystem in ReSharper is something you actually want. Your input will directly influence how we expand agent support in ReSharper.

Tell us what to build next: Once you’ve given Junie a try, click Share Feedback in the AI Chat tool window to access our survey at any time. Let us know how the integration feels, and more importantly, tell us exactly which AI agents you want to see in the ACP Agent Registry.

Fill out the survey

Ready to break free from vendor lock-in? Download ReSharper 2026.2 EAP 1 today, and let’s build a truly open ecosystem together.

Download to try Junie

High-Severity Security Issue Affecting TeamCity On-Premises (CVE-2026-44413) – Update to 2026.1 Now

Summary

  • A high-severity post-authentication security vulnerability has been identified in TeamCity On-Premises and assigned the CVE identifier CVE-2026-44413.
  • It may allow any authenticated user to expose some parts of the TeamCity server API to unauthorized users.
  • It affects all TeamCity On-Premises versions through 2025.11.4.
  • The issue has been fixed in version 2026.1.
  • We encourage all users to update their servers to the latest version.
  • For those who are unable to do so, we have released a security patch plugin.
  • TeamCity Cloud is not affected and requires no action.

Details

A high-severity post-authentication security vulnerability has been identified in TeamCity On-Premises. If exploited, this flaw may allow any authenticated user to expose some parts of the TeamCity server API to unauthorized users.

All TeamCity On-Premises versions through 2025.11.4 are affected, while TeamCity Cloud is not affected and requires no action. We have verified that TeamCity Cloud environments were not impacted by this issue.

This post-authentication privilege escalation vulnerability was reported to us privately on April 30, 2026, by Martin Orem (binary.house) in accordance with our coordinated disclosure policy. It has been assigned the Common Vulnerabilities and Exposures (CVE) identifier CVE-2026-44413.

A fix for the issue has been introduced in version 2026.1. We have also released a security patch plugin for 2017.1+ so that customers who are unable to upgrade can still patch their environments.

If your TeamCity server is publicly accessible over the internet and you are unable to apply one of the mitigation options described below, we strongly recommend temporarily restricting external access until you have done so.

Mitigation option 1: Update your server to 2026.1

To update your TeamCity server, download and install the latest version (2026.1) or use the automatic update option within TeamCity. This version includes a fix for the vulnerability described above.

Mitigation option 2: Apply the security patch plugin

If you are unable to update your server to version 2026.1, we have also released a security patch plugin that can be installed on TeamCity 2017.1+ and will patch the specific vulnerability described above.

You can acquire it in the following ways:

  • Download and install it manually.
  • For TeamCity 2024.03 and newer, TeamCity automatically downloads available security patch plugins and notifies administrators (if notifications are configured). You can review and apply pending security patches from Administration | Updates, under Available security updates.

For TeamCity 2017.1 to 2018.1, a server restart is required after the plugin is installed. Starting from TeamCity 2018.2, you can enable it without restarting the TeamCity server.

See the TeamCity plugin installation instructions for more information.

Important: The security patch plugin will only address the vulnerability described above. We always recommend upgrading your server to the latest version to benefit from many other security updates.

Best practices

As a longer-term security best practice for internet-facing TeamCity servers (that is, servers accessible to external users who can reach the TeamCity login screen), consider requiring connections through a VPN or implementing an additional security layer to help prevent unauthorized access. Even exposing the TeamCity login screen or REST API can provide potential entry points for attackers to exploit newly disclosed vulnerabilities.

Technical details

This vulnerability affects all TeamCity installations where the firewall permits inbound connections on ports other than the standard HTTP/HTTPS one used by TeamCity, or where build agents are running on the same host as the TeamCity server.

Exploitation of this vulnerability requires access to a TeamCity account, including a standard user account or the guest user account (if guest access is enabled). If exploited, it could allow an authenticated user to expose some parts of the TeamCity server API to unauthorized access.

As a general best practice, we strongly recommend restricting inbound network access to only required ports.

TeamCity servers should also run on dedicated hosts separate from build agents, as described in our documentation.

Support

If you have any questions regarding this issue or encounter problems upgrading, please get in touch with the TeamCity Support team by submitting a ticket.

Rider 2026.2 Early Access Program Begins With Performance Improvements

The Early Access Program (EAP) for Rider 2026.2 is now open, and the first preview build for the upcoming major release is already out. 

There are several ways for you to get your hands on the first preview build:

  • Download and install it from our website.
  • Get it via the Toolbox App.
  • Install this snap package from the SnapCraft store if you’re using a compatible Linux distribution.
Download Rider 2026.2 EAP 1

A reminder of what the EAP is all about

The Early Access Program is a long-standing tradition that gives our users early access to the new features we’re preparing. By participating, you get a first look at what’s coming and a chance to help shape the final release through your feedback.

EAP builds are free to use, though they may be less stable than the final release versions. You can learn more about the EAP and why you might want to participate here.

And now on to Rider 2026.2 EAP 1 release highlights.

Major Roslyn performance improvements with faster branch switching

Rider 2026.2 EAP 1 introduces a significant round of performance improvements for Roslyn integration, with a focus on one of the most painful scenarios in large solutions: switching branches.

Branch switching is one of those everyday actions that should feel uneventful. You change branches, Rider updates the solution model, Roslyn catches up, and you keep working. But in large solutions, especially those with many projects or target frameworks, this process could become noticeably slow. In some cases, it could also cause freezes or Roslyn crashes.

Rider 2026.2 EAP 1 addresses this with a set of targeted improvements to how Rider communicates project model changes to Roslyn. We’ve reduced the number of requests, added batching, cut down the amount of transferred data, and fixed a hang caused by passing non-existent files to Roslyn.

The result is a much smoother experience when switching branches, especially in large or complex solutions. In typical large-project scenarios, branch switching is now 2–3x faster.

In some of the worst cases we tested, the improvement is much more dramatic. One BenchmarkDotNet scenario (~25 projects included) improved from 8 minutes to 5 seconds, making branch switching in that case nearly 100x faster.

This work also fixes a number of Roslyn-related issues around project references, .editorconfig handling, available analyzers, and target framework changes.

Game dev goodness

Unity 

For Unity developers, we’ve significantly reworked how Rider handles asmdef references. This should improve how Rider understands Unity projects that use assembly definition files and make project model updates more reliable.

Godot 

Rider 2026.2 EAP 1 brings a set of fixes and quality improvements for GDScript support, addressing several issues that could make the editing experience less smooth than expected.

Spellchecking is now available in GDScript files, helping you catch typos directly in the editor. 

Azure Functions support is moving into Rider

We’re migrating Azure Functions features for local development from the separate Azure Toolkit plugin into JetBrains Rider itself.

This means you’ll be able to develop Azure Functions locally without installing any additional plugins. Most of the existing functionality has already been moved, including project and trigger creation, running, debugging, Azurite integration, and more. A few smaller features are still pending and will be added in upcoming EAP builds.

We’ve also added the ability to create an Azure Functions trigger from the project creation dialog. In addition, Azure Functions projects can now be debugged inside a Docker container. Previously, this Docker debugging workflow was available only for regular .NET projects.

Aspire improvements

Rider 2026.2 EAP 1 also includes several updates for Aspire.

We now support file-based AppHosts for Aspire projects. Dev certificate validation for Aspire apps has also been improved.

There are also improvements to how AppHost.cs is displayed in the editor. Rider now shows the status of each resource, such as whether it’s running or stopped, and lets you execute resource commands directly from the gutter.


For the full list of changes included in this build, please see our release notes.

We encourage you to download the EAP build, give these new features a try, and share your feedback. The Early Access Program is a collaborative effort, and your input plays a vital role in making Rider the best it can be.

Download Rider 2026.2 EAP 1

Thank you for being part of our EAP community, and we look forward to hearing what you think!

The GoLand 2026.2 Early Access Program Has Started

The Early Access Program (EAP) for GoLand 2026.2 is now open. It’s a great opportunity to try upcoming features for free and help shape the product.

EAP builds give you early access to what we’re working on, so you can test new functionality in your real workflows and share feedback with the GoLand team. Your input directly influences what makes it into the final release.

If you are new to the EAP, here is how it works:

  • The EAP allows you to try new features before the final release.
  • New EAP builds are released regularly during the cycle.
    • Builds are still in development and may be unstable.
    • Builds are free for the whole EAP cycle until Beta.
  • Your feedback helps us improve the product.
  • During the EAP, we will also share a survey. Participating gives you a chance to receive a free GoLand subscription or an Amazon Gift Card.

In this release cycle, we’re focusing on performance insights, memory optimization, and smoother project onboarding. The goal is simple. You should be able to understand your Go program’s behavior and optimize its performance without leaving the IDE.

You can download the first EAP build from the Toolbox App, from our website, or by updating from inside the IDE.

Download GoLand 2026.2 EAP

Disclaimer

We continue to work on performance tooling, analysis accuracy, and workflow improvements throughout the EAP cycle.

You can explore the full list of tasks and features we are currently working on in our roadmap.

This roadmap reflects our current priorities. Plans can change as we collect feedback and validate ideas during the EAP.

What we’re planning for GoLand 2026.2

This EAP cycle introduces a new set of tools for performance analysis and several improvements to everyday workflows.

Get insight into performance without leaving the IDE

Evaluate your program from the Go Performance Optimization tool window

You can now access all performance tools in one place. The new Go Performance Optimization tool window brings together profiling, escape analysis, and struct optimization.

You no longer need to switch between different tools or workflows. You can analyze CPU usage, memory behavior, and allocation patterns from a single UI.

Profile any Go application with pprof

You can now run profiling for both tests and regular run configurations.

The profiler is based on pprof and integrates directly into the IDE. It helps you answer key questions about your program:

  • Where does the program spend CPU time?
  • How much memory does it allocate and retain?
  • Which parts of the code create excessive allocations?
  • What goroutines are running and where are they blocked?

A variety of profiling types are included:

  • The CPU profiler shows where your program spends CPU time during active execution. It samples running goroutines and helps you find CPU-intensive code paths.
  • The Heap and Allocs profilers track memory usage and allocation patterns. Both collect the same allocation data but use different default views. The Heap profile shows memory that is currently in use, while the Allocs profile shows total memory allocated over time, including memory that has already been freed.
  • The Goroutine profiler shows all current goroutines and their stack traces. It helps you understand what goroutines are doing and identify issues such as leaks or deadlocks.
  • The Block profiler shows where goroutines are blocked by synchronization operations, such as channel operations or locks. It helps you find delays caused by code that is waiting instead of being executed.
  • The Mutex profiler shows lock contention between goroutines. It helps you identify where goroutines block each other when accessing shared data.

There are a few ways to start profiling:

  • From the run configuration selector on the toolbar.
  • From the gutter next to the main function or a test.
  • From the Go Performance Optimization tool window.
  • From the Run tool window by using Rerun with Profiler.

Detect unnecessary heap allocations with escape analysis

Escape analysis helps you understand when values move from the stack to the heap.

A stack allocation is fast and short-lived. A heap allocation is slower and requires garbage collection. When values escape to the heap unnecessarily, they increase memory usage and reduce performance.

GoLand highlights these cases directly in the editor. You can see:

  • Which variables escape.
  • Why they escape.
  • How the data flows through your code.
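
As a minimal illustration of the kind of case this flags (plain Go behavior, not GoLand output), returning a pointer to a local value forces that value onto the heap:

package main

import "fmt"

// newCounter returns the address of a local variable, so the variable cannot
// stay on the stack: it escapes to the heap and becomes work for the garbage collector.
func newCounter() *int {
	n := 0
	return &n // escapes to the heap
}

// add keeps everything on the stack: nothing outlives the call.
func add(a, b int) int {
	return a + b
}

func main() {
	c := newCounter()
	*c = add(1, 2)
	fmt.Println(*c)
}

Compiling with go build -gcflags=-m prints the compiler's own escape decisions for comparison; GoLand surfaces the same information inline in the editor.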

Optimize struct layouts for better memory usage

GoLand now helps you improve the layout of your structs, allowing you to conserve memory.

In Go, field order affects memory alignment. Poor alignment introduces padding and increases the size of a struct.

For example:

type Inefficient struct {
    A byte  // 1 byte
    B int32 // 4 bytes
    C byte  // 1 byte
}

The struct is laid out in memory as follows. Field A occupies 1 byte. The next 3 bytes are padding to align field B to a 4-byte boundary. Field B then occupies 4 bytes, while field C occupies 1 byte. After that, another 3 bytes of padding are added so that the total struct size matches the largest alignment requirement. As a result, the struct takes 12 bytes in total, even though the fields themselves require only 6 bytes.

An optimal field ordering looks like this; the same six bytes of data now fit in 8 bytes (4 for B, 1 each for A and C, plus 2 bytes of trailing padding):

type Efficient struct {
    B int32 // 4 bytes
    A byte  // 1 byte
    C byte  // 1 byte
}
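
You can verify the difference outside the IDE with a few lines of plain Go (an illustration, not a GoLand feature):

package main

import (
	"fmt"
	"unsafe"
)

type Inefficient struct {
	A byte  // 1 byte, then 3 bytes of padding before B
	B int32 // 4 bytes
	C byte  // 1 byte, then 3 bytes of trailing padding
}

type Efficient struct {
	B int32 // 4 bytes
	A byte  // 1 byte
	C byte  // 1 byte, then 2 bytes of trailing padding
}

func main() {
	fmt.Println(unsafe.Sizeof(Inefficient{})) // 12
	fmt.Println(unsafe.Sizeof(Efficient{}))   // 8
}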

GoLand detects suboptimal layouts and suggests a quick-fix. This helps you reduce the memory footprint without changing program behavior.

See CPU and memory usage in real time

You can now monitor CPU and memory usage while your program runs.

Live charts are available in:

  • The Run tool window.
  • The Go Performance Optimization tool window.

This gives you immediate feedback. You can see how changes in code affect resource usage without running a full profiling session.

Start projects faster with automatic run/debug configurations

GoLand can now detect main packages in your project and create run/debug configurations automatically.

When you open a project, the IDE:

  • Scans for executable entry points.
  • Creates run configurations, reducing manual setup.

Share your feedback

Your feedback shapes GoLand.

Try the new features in your projects and tell us what works and what doesn’t. Report issues and vote for features in our issue tracker.

Happy coding,

The GoLand team

My SonarQube scans were crawling; turns out Docker on WSL 2 only had 1 CPU and 1 GB of RAM

If you’re running SonarQube (or anything CPU- and memory-hungry) inside Docker Desktop on Windows and the scans feel like they’re running through molasses, your .wslconfig is probably the first place to look. That was my story this week, and the fix was satisfyingly small.

Here’s the whole journey, end to end, including the gotchas.

Symptom: scans stall, fans spin, nothing finishes

I was running a SonarQube scan on a mid-sized codebase inside a Docker container on Windows. The scan would chug along for a few minutes, then either crawl or stall outright. Nothing in the SonarQube logs screamed “out of memory”, and the Windows host itself felt fine, so it didn’t look like a host-side resource problem.

Diagnosis: ask Docker what it actually has

Docker Desktop on Windows runs everything inside a WSL 2 VM called docker-desktop. Whatever resources you’ve given that VM set the ceiling for every container you run. You can ask it directly:

wsl -d docker-desktop -- nproc
wsl -d docker-desktop -- free -h

The output I got:

1
              total        used        free      shared  buff/cache   available
Mem:           907M       ...
Swap:          4.0G        ...

One CPU. Less than 1 GB of RAM. No wonder: SonarQube’s analyzer plus its Java heap alone want more than that, and anything that hit a swap-heavy phase was going to thrash.

Fix: .wslconfig

WSL 2 reads global VM settings from a single file at %UserProfile%\.wslconfig. It doesn’t exist by default; you create it. Mine ended up looking like this:

[wsl2]
memory=8GB
processors=4
swap=8GB

Three lines. That’s the whole fix.

A few notes on syntax that tripped me up briefly:

  • The section header must be [wsl2] — lowercase, exact.
  • memory and swap are “size” values. You must include the unit (GB or MB). Per Microsoft’s spec: “Entries with the size value default to B (bytes), and the unit is omissible.” Which is a polite way of saying that swap=4 means 4 bytes, not 4 GB. Don’t do what I almost did.
  • No spaces around the =.
  • processors is an integer.

Apply it

Save the file to C:\Users\<you>\.wslconfig, then in PowerShell:

wsl --shutdown

Quit Docker Desktop from the tray icon and start it again. That’s it.

The 8-second rule. Microsoft’s docs are explicit about this: even after wsl --shutdown, give it about eight seconds for the VM to fully stop before relaunching. You can verify with:

wsl --list --running

If nothing’s listed, you’re safe to start back up.

Verify

After Docker came back up:

PS> wsl -d docker-desktop -- nproc
4

PS> wsl -d docker-desktop -- free -h
              total        used        free      shared  buff/cache   available
Mem:           7.8G      485.8M        6.6G        3.1M      697.7M        7.1G
Swap:          8.0G           0        8.0G

4 CPUs, 7.8 GB RAM (the 8 GB cap, minus a sliver of overhead), 8 GB of swap. Exactly what the config asked for.

The SonarQube scan that had been crawling finished in a sensible amount of time on the next run.

Optional extras worth knowing about

If you want to go further, the [wsl2] section supports a bunch of other documented keys. The two I’d actually consider for a SonarQube-style workload:

[wsl2]
memory=8GB
processors=4
swap=8GB
vmIdleTimeout=60000

[experimental]
autoMemoryReclaim=gradual
sparseVhd=true

  • vmIdleTimeout shuts the VM down after 60 s of idle, so it’s not hoarding 8 GB while you’re not using Docker.
  • autoMemoryReclaim=gradual returns idle memory back to Windows slowly instead of dropping caches abruptly, which is nicer for scans that have memory spikes.
  • sparseVhd=true makes new WSL VHDs sparse, so disk usage actually shrinks when you free space inside the VM.

Gotchas I want to save you from

A few things I learned the hard way or almost did:

Don’t allocate everything. Setting memory to your entire physical RAM, or processors to your full core count, will starve Windows itself. Aim for roughly 75% of physical RAM and leave a couple of cores for the host.

pageReporting isn’t a real key. I saw it referenced in older blog posts. It’s not in the current Microsoft spec – WSL silently ignores unknown keys, so the file looks valid but the setting does nothing. Stick to the documented list.

Path values need escaped backslashes. If you set swapFile or kernel, write the path as C:\\Temp\\swap.vhdx, not C:\Temp\swap.vhdx.

Docker Desktop’s docker-desktop VM is what you care about, not a user distro. When validating, run wsl -d docker-desktop -- free -h, not wsl -- free -h. The latter checks whatever default distro you have installed (Ubuntu or similar), which is a separate VM.

TL;DR

If Docker on Windows feels sluggish, check what WSL 2 is actually giving it:

wsl -d docker-desktop -- nproc
wsl -d docker-desktop -- free -h

If those numbers are tiny, drop a .wslconfig at %UserProfile%\.wslconfig:

[wsl2]
memory=8GB
processors=4
swap=8GB

wsl --shutdown, restart Docker Desktop, verify, and get on with your day.

Reference: Microsoft’s advanced settings configuration in WSL.

[Tutorial] Building a Shielded Token dApp on Midnight: From Compact Contract to React UI

📁 Full source code: midnight-apps/shielded-token

Target audience: Developers

This tutorial walks you through building a complete shielded token DApp on the Midnight network. You will deploy a Compact smart contract, implement operations such as minting, transferring, and burning tokens, generate zero-knowledge proofs, and build a React frontend that lets users interact with shielded tokens in the browser.

Shielded tokens differ from unshielded tokens in that all balances and amounts remain hidden from on-chain explorers. Only the wallet owner can decrypt their balances locally. The smart contract proves correctness via zero-knowledge proofs without revealing any sensitive values. Public state variables such as totalSupply and totalBurned track aggregate metrics, while individual coin values, recipients, and the transaction graph remain private.

Prerequisites

  • Node.js installed (v20+)
  • A Midnight Wallet (1AM or Lace)
  • Some Preprod faucet NIGHT tokens
  • A package.json with the needed packages
    • @midnight-ntwrk/compact-runtime
    • @midnight-ntwrk/dapp-connector-api
    • @midnight-ntwrk/ledger-v8
    • @midnight-ntwrk/midnight-js-contracts
    • @midnight-ntwrk/midnight-js-dapp-connector-proof-provider
    • @midnight-ntwrk/midnight-js-fetch-zk-config-provider
    • @midnight-ntwrk/midnight-js-indexer-public-data-provider
    • @midnight-ntwrk/midnight-js-level-private-state-provider
    • @midnight-ntwrk/midnight-js-network-id
    • @midnight-ntwrk/midnight-js-types
    • @midnight-ntwrk/wallet-sdk-address-format
    • react, react-dom, react-router-dom, zustand, semver

1. Building and compiling the smart contract

The smart contract for shielded tokens resides in contracts/Token.compact. It manages public counters such as totalSupply and totalBurned, and uses the Zswap shielded token primitives to create, transfer, and destroy private coins.

Public ledger state

Create these two essential public counters to track the token lifecycle:


// --- Public ledger state ---

export ledger totalSupply: Uint<64>;
export ledger totalBurned: Uint<128>;

These two are public: they do not contain any sensitive or private information. They only track totalSupply and totalBurned; the ownership of the shielded tokens remains private.

Witnesses for private data

The Compact smart contract for shielded tokens requires a source of randomness for coin nonces. Each shielded coin needs to have a unique nonce so its commitment is distinct:


// --- Witnesses for private/off-chain data ---

witness localNonce(): Bytes<32>;

For every mint, a fresh random 32-byte nonce is generated. It lives in the TypeScript layer and is bound into the zero-knowledge proof generation.
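
On the TypeScript side, the witness is just a function you supply alongside the generated contract. A minimal sketch, assuming the usual ({ privateState }) => [newPrivateState, value] witness shape and using Web Crypto for randomness (the file path and type name are illustrative, not the repo’s exact code):

// src/contracts/witnesses.ts (illustrative)
export type TokenPrivateState = Record<string, never>; // this contract keeps no persistent private state

export const witnesses = {
  // Supplies 32 fresh random bytes each time a circuit calls localNonce().
  localNonce: ({ privateState }: { privateState: TokenPrivateState }): [TokenPrivateState, Uint8Array] => [
    privateState,
    crypto.getRandomValues(new Uint8Array(32)),
  ],
};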

Minting a shielded token

The first circuit is createShieldedToken. It mints a new shielded token with a unique nonce and sends it to a recipient:


// --- Minting to self ---

export circuit createShieldedToken(
    amount: Uint<64>,
    recipient: Either<ZswapCoinPublicKey, ContractAddress>
): ShieldedCoinInfo {
    const domain = pad(32, "shielded:token");
    const nonce = localNonce();
    const coin = mintShieldedToken(
        disclose(domain),
        disclose(amount),
        disclose(nonce),
        disclose(recipient)
    );
    totalSupply = (totalSupply + disclose(amount)) as Uint<64>;
    return coin;
}

mintShieldedToken is a ledger primitive. It creates a new shielded token commitment. The domain separates this token from others on the network, and the nonce ensures its uniqueness.

Note: disclose() is required because the ledger needs to see the recipient on-chain in order to route the output correctly. Only the recipient can decrypt the actual amount.

The atomic mint-and-send pattern

mintAndSend is the most important circuit in this smart contract. It atomically mints a coin and forwards it to a recipient in one transaction without any Merkle qualification needed:


// --- Minting and sending ---

export circuit mintAndSend(
    amount: Uint<64>,
    recipient: Either<ZswapCoinPublicKey, ContractAddress>
): ShieldedSendResult {
    const domain = pad(32, "shielded:token");
    const nonce = localNonce();

    // Mint to contract first
    const coin = mintShieldedToken(
        disclose(domain),
        disclose(amount),
        disclose(nonce),
        right<ZswapCoinPublicKey, ContractAddress>(kernel.self())
    );

    // Immediately forward — no Merkle qualification needed
    const result = sendImmediateShielded(
        disclose(coin),
        disclose(recipient),
        disclose(amount) as Uint<128>
    );

    totalSupply = (totalSupply + disclose(amount)) as Uint<64>;
    return result;
}

sendImmediateShielded spends a token that was created in the same transaction. The kernel pairs the mint and spend internally using mt_index: 0, meaning no on-chain Merkle path lookup is needed.

The ShieldedSendResult contains two fields:

  • sent: the coin that was sent to the recipient
  • change: a Maybe<ShieldedCoinInfo> containing any remainder

The Merkle tree constraint

Why do freshly minted tokens need special handling? A newly minted shielded token has not yet been committed to the on-chain Merkle tree, so it is not immediately spendable in a separate, independent transaction. Thus the exported circuit transferShielded requires QualifiedShieldedCoinInfo (which includes mt_index), while the mintAndSend circuit bypasses this by using sendImmediateShielded.

export circuit transferShielded(
    coin: QualifiedShieldedCoinInfo,
    recipient: Either<ZswapCoinPublicKey, ContractAddress>,
    amount: Uint<128>
): ShieldedSendResult {
    const result = sendShielded(disclose(coin), disclose(recipient), disclose(amount));
    return result;
}

sendShielded requires a Merkle inclusion proof from coin.mt_index to the current Zswap root. The prover must have this path, and the verifier checks it against the on-chain root. If the wallet’s local Zswap state is even slightly out of sync with the verifier’s expected root, then the proof fails.

This is a trade-off to be considered carefully depending on your use case(s):

Primitive | Requires mt_index | Use case
sendImmediateShielded | No | Same-tx mint/send or deposit/burn
sendShielded | Yes | Spending previously committed coins

Burning shielded tokens

The depositAndBurn circuit burns the received coin in the same transaction:

export circuit depositAndBurn(
    coin: ShieldedCoinInfo,
    amount: Uint<128>
): ShieldedSendResult {
    receiveShielded(disclose(coin));
    const burnAddr = shieldedBurnAddress();
    const result = sendImmediateShielded(
        disclose(coin),
        burnAddr,
        disclose(amount)
    );
    totalBurned = (totalBurned + disclose(amount)) as Uint<128>;
    return result;
}

receiveShielded declares that the smart contract receives the coin. The wallet’s balancer adds a matching input automatically. shieldedBurnAddress() is a ledger constant on the Midnight network; coins sent there are permanently removed from the circulating supply.

Important Caveat: sendImmediateShielded sends change to kernel.self() (the smart contract). Thus a partial burn leaves a contract-owned shielded output that is not tracked elsewhere. The UI enforces full burn by default to avoid this.

Additional circuits

nextNonce is used to derive a deterministic nonce sequence:

export circuit nextNonce(index: Uint<128>, currentNonce: Bytes<32>): Bytes<32> {
    return evolveNonce(disclose(index), disclose(currentNonce));
}

evolveNonce is used to derive the next nonce from a counter index and current nonce; it’s useful for applications requiring deterministic nonce sequences.

View the full contract in Token.compact.

Compiling the compact smart contract

Install the Compact compiler:

curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/midnightntwrk/compact/releases/latest/download/compact-installer.sh | sh

Then compile:

compact compile contracts/Token.compact src/contracts

This will generate files and folders such as keys and zkir, all of which are essential for deploying and interacting with the smart contract later.

Note: You can skip this step if you cloned the repo, as compiled artifacts are already included. However, if you recompile, you will not be able to use the deployed smart contract because the old verification keys will no longer match.

2. React UI implementation

Using the smart contract-generated artifacts in src/contracts from the frontend involves a few steps:

Wallet provider setup

Midnight wallets inject a global window.midnight object before page load.

Start with the constants:

// src/hooks/wallet.constants.ts
export const COMPATIBLE_CONNECTOR_API_VERSION = '4.x';
export const NETWORK_ID = 'preprod';

Note: COMPATIBLE_CONNECTOR_API_VERSION is '4.x', not '^4.0.0'. The '4.x' semver range accepts any 4.x.y version the wallet reports.

The detection function enumerates window.midnight, validates each entry, and filters by version.

// src/hooks/useWallet.ts
export function getCompatibleWallets(): InitialAPI[] {
  if (!window.midnight) return [];

  return Object.values(window.midnight).filter(
    (wallet): wallet is InitialAPI =>
      !!wallet &&
      typeof wallet === 'object' &&
      'apiVersion' in wallet &&
      semver.satisfies(wallet.apiVersion, COMPATIBLE_CONNECTOR_API_VERSION)
  );
}

When wallet.connect(networkId) is called, it triggers the wallet extension connection flow.

// src/hooks/useWallet.ts
connect: async (networkId = NETWORK_ID) => {
  const { wallet } = get();
  if (!wallet) {
    set({ error: 'No wallet selected' });
    return;
  }

  set({ isConnecting: true, error: null });

  try {
    const connectedApi = await wallet.connect(networkId);
    const status = await connectedApi.getConnectionStatus();

    if (status.status !== 'connected') {
      throw new Error(`Wallet status: ${status.status}`);
    }

    const config = await connectedApi.getConfiguration();
    const shielded = await connectedApi.getShieldedAddresses();
    const unshielded = await connectedApi.getUnshieldedAddress();
    const dustAddr = await connectedApi.getDustAddress();

    set({
      connectedApi,
      isConnected: true,
      config,
      addresses: {
        shieldedAddress: shielded.shieldedAddress,
        shieldedCoinPublicKey: shielded.shieldedCoinPublicKey,
        shieldedEncryptionPublicKey: shielded.shieldedEncryptionPublicKey,
        unshieldedAddress: unshielded.unshieldedAddress,
        dustAddress: dustAddr.dustAddress,
      },
      balances: {
        shielded: {},
        unshielded: {},
        dust: { balance: 0n, cap: 0n },
      },
    });

    localStorage.setItem('midnight_last_wallet', wallet.rdns);
  } catch (err) {
    set({
      error: err instanceof Error ? err.message : 'Connection failed',
      isConnected: false,
      connectedApi: null,
    });
  } finally {
    set({ isConnecting: false });
  }
},

Or if you want, you can use a starter I built, dapp-connect.

First, start by cloning the repository.

git clone https://github.com/0xfdbu/midnight-apps.git

Install dependencies, then run the starter.

cd midnight-apps/dapp-connect
npm install
npm run dev

Building the providers and the TypeScript API

Before continuing, you need a helper function to build the providers.

// src/hooks/wallet/services/providers.ts

import type { ConnectedAPI } from '@midnight-ntwrk/dapp-connector-api';
import type { MidnightProviders } from '@midnight-ntwrk/midnight-js-types';
import { INDEXER_HTTP, INDEXER_WS, CONTRACT_PATH, PRIVATE_STATE_PASSWORD } from '../wallet.constants';
import { indexerPublicDataProvider } from '@midnight-ntwrk/midnight-js-indexer-public-data-provider';
import { FetchZkConfigProvider } from '@midnight-ntwrk/midnight-js-fetch-zk-config-provider';
import type { ZKConfigProvider } from '@midnight-ntwrk/midnight-js-types';
import { dappConnectorProofProvider } from '@midnight-ntwrk/midnight-js-dapp-connector-proof-provider';
import { levelPrivateStateProvider } from '@midnight-ntwrk/midnight-js-level-private-state-provider';
import { toHex, fromHex } from '@midnight-ntwrk/midnight-js-utils';
import { Transaction, CostModel } from '@midnight-ntwrk/ledger-v8';

Provider builder function:

export async function buildProviders(
  connectedApi: ConnectedAPI,
  coinPublicKey: string,
  encryptionPublicKey: string,
  contractAddress?: string,
  existingPrivateStateProvider?: any
): Promise<MidnightProviders> {
  const fetchProvider = new FetchZkConfigProvider(
    `${window.location.origin}${CONTRACT_PATH}`,
    fetch.bind(window)
  );
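  // ArtifactValidatingProvider is a custom wrapper around the fetch provider
  // (defined elsewhere in the repo; it is not part of the imports shown above).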
  const zkConfigProvider = new ArtifactValidatingProvider(fetchProvider);

  const privateStateProvider = existingPrivateStateProvider || levelPrivateStateProvider({
    accountId: coinPublicKey,
    privateStoragePasswordProvider: () => PRIVATE_STATE_PASSWORD,
  });

  if (contractAddress) {
    privateStateProvider.setContractAddress(contractAddress);
  }

  return {
    privateStateProvider,
    publicDataProvider: indexerPublicDataProvider(INDEXER_HTTP, INDEXER_WS),
    zkConfigProvider,
    proofProvider: await dappConnectorProofProvider(connectedApi, zkConfigProvider, CostModel.initialCostModel()),
    walletProvider: {
      getCoinPublicKey: () => coinPublicKey,
      getEncryptionPublicKey: () => encryptionPublicKey,
      async balanceTx(tx: any, _ttl?: Date): Promise<any> {
        const serializedTx = toHex(tx.serialize());
        const received = await connectedApi.balanceUnsealedTransaction(serializedTx);
        return Transaction.deserialize('signature', 'proof', 'binding', fromHex(received.tx));
      },
    },
    midnightProvider: {
      async submitTx(tx: any): Promise<string> {
        await connectedApi.submitTransaction(toHex(tx.serialize()));
        const txIdentifiers = (tx as any).identifiers();
        return txIdentifiers?.[0] ?? '';
      },
    },
  };
}

Now create the hook for the TypeScript API. These are some of the essential imports for the API:

// src/hooks/wallet/services/api.ts

import type { ConnectedAPI } from '@midnight-ntwrk/dapp-connector-api';
import { indexerPublicDataProvider } from '@midnight-ntwrk/midnight-js-indexer-public-data-provider';
import { buildProviders } from './providers';
import { getContract, createInitialPrivateState } from './contract';
import { INDEXER_HTTP, INDEXER_WS, CONTRACT_PATH, PRIVATE_STATE_ID, PRIVATE_STATE_PASSWORD } from '../wallet.constants';
import { levelPrivateStateProvider } from '@midnight-ntwrk/midnight-js-level-private-state-provider';
import { CompiledContract } from '@midnight-ntwrk/compact-js';

Deploying the smart contract

deployTokenContract builds a CompiledContract instance, binds the localNonce witness, attaches the compiled ZK artifacts, and then calls deployContract with the providers:

// src/hooks/wallet/services/api.ts

export async function deployTokenContract(
  connectedApi: ConnectedAPI,
  coinPublicKey: string,
  encryptionPublicKey: string
): Promise<string> {
  const { deployContract } = await import('@midnight-ntwrk/midnight-js-contracts');
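  // ensurePrivateState is a helper defined elsewhere in api.ts (see the full file);
  // it prepares the private state provider for this account before deployment.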
  const privateStateProvider = await ensurePrivateState(coinPublicKey, 'tmp-deploy');
  const providers = await buildProviders(connectedApi, coinPublicKey, encryptionPublicKey, undefined, privateStateProvider);

  const contractModule = await import(`${CONTRACT_PATH}/contract/index.js`);
  const cc: any = CompiledContract.make('shielded-token', contractModule.Contract);
  const withWitnesses = (CompiledContract as any).withWitnesses({
    localNonce: ({ privateState }: any): [any, Uint8Array] => {
      const nonce = crypto.getRandomValues(new Uint8Array(32));
      return [privateState, nonce];
    },
  });
  const withAssets = (CompiledContract as any).withCompiledFileAssets(CONTRACT_PATH);
  const compiledContract = withWitnesses(withAssets(cc));

  const deployed = await deployContract(providers as any, {
    compiledContract,
    privateStateId: PRIVATE_STATE_ID,
    initialPrivateState: createInitialPrivateState(),
    args: [],
  } as any);

  const address = deployed.deployTxData.public.contractAddress;
  localStorage.setItem('shielded_token_contract', address);
  return address;
}

Wire deployTokenContract into the frontend

// src/pages/Deploy.tsx
// Other imports
import { useWalletStore } from '../hooks/useWallet';
import { deployTokenContract } from '../hooks/wallet/services/api';

  const handleDeploy = async () => {
    if (!connectedApi || !addresses?.shieldedCoinPublicKey || !addresses?.shieldedEncryptionPublicKey) {
      setError('Wallet not fully connected');
      return;
    }
    setStatus('pending');
    setError(null);

    try {
      const addr = await deployTokenContract(
        connectedApi,
        addresses.shieldedCoinPublicKey,
        addresses.shieldedEncryptionPublicKey
      );
      setContractAddress(addr);
      setStatus('success');
    } catch (err) {
      console.error('[Deploy] Error:', err);
      setError(err instanceof Error ? err.message : 'Deployment failed');
      setStatus('error');
    }
  };

The smart contract address is then saved to localStorage.

Note: The same API pattern used in deployTokenContract is also used for calling the compiled circuits. View the full API in api.ts.

Minting tokens

The Mint page has two modes: Mint to Self and Mint & Send.

Mint to Self calls createShieldedToken and sends the minted coin to the user’s own shielded coin public key:

const selfRecipient = {
  is_left: true,
  left: { bytes: parseKeyBytes(addresses.shieldedCoinPublicKey) },
  right: { bytes: ZERO_BYTES32 },
};

const result = await callCreateShieldedToken(
  connectedApi,
  addresses.shieldedCoinPublicKey,
  addresses.shieldedEncryptionPublicKey,
  value,
  selfRecipient
);

When a mint is successful, Nonce, Color, and Value are stored in localStorage so they can be referenced later during the burn phase.

Mint & Send calls mintAndSend and sends the freshly minted coins to the address the user entered:

const recipientBytes = parseShieldedAddress(recipient);
const recipientEither = {
  is_left: true,
  left: { bytes: recipientBytes },
  right: { bytes: ZERO_BYTES32 },
};

const result = await callMintAndSend(
  connectedApi,
  addresses.shieldedCoinPublicKey,
  addresses.shieldedEncryptionPublicKey,
  value,
  recipientEither
);

A small utility function, parseShieldedAddress, extracts the 32-byte coin public key from the user-typed shielded address:

/**
 * Parse a Bech32m shielded address (e.g. `m1q...`) and extract the 32-byte
 * shielded coin public key that the smart contract expects as a recipient.
 */
export function parseShieldedAddress(address: string): Uint8Array {
  try {
    const parsed = MidnightBech32m.parse(address);
    const shieldedAddr = ShieldedAddress.codec.decode(getNetworkId(), parsed);
    return new Uint8Array(shieldedAddr.coinPublicKey.data);
  } catch {
    throw new Error('Invalid shielded address. Paste a Bech32m address starting with the network prefix.');
  }
}

As noted above, a successful mint stores the coin’s Nonce, Color, and Value in localStorage, so users don’t need to enter those values manually during the burn phase.

Note: The createShieldedToken circuit returns ShieldedCoinInfo, while the mintAndSend circuit returns a ShieldedSendResult containing sent and change. For mintAndSend with exact amounts, change is typically None.

Coin storage

Shielded coins are different from unshielded ones: they are private, and the wallet does not expose an API to enumerate them with their nonces, so the DApp stores mint results in localStorage.

export interface StoredCoin {
  id: string;
  nonce: string;
  color: string;
  value: string;
  source: 'mint' | 'mintAndSend' | 'change';
  txId: string;
  createdAt: string;
}

The Mint page writes coins with saveStoredCoins, and the Burn page reads them with getStoredCoins. Sending tokens from the wallet does not touch this store.
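
coinStore.ts is essentially a thin wrapper around localStorage. A minimal sketch, assuming a single storage key (the key name below is hypothetical; check coinStore.ts in the repo for the real one):

// src/hooks/wallet/services/coinStore.ts (sketch)
// StoredCoin is the interface shown above.
const COIN_STORAGE_KEY = 'shielded_token_coins'; // hypothetical key name

export function getStoredCoins(): StoredCoin[] {
  const raw = localStorage.getItem(COIN_STORAGE_KEY);
  return raw ? (JSON.parse(raw) as StoredCoin[]) : [];
}

export function saveStoredCoins(coins: StoredCoin[]): void {
  localStorage.setItem(COIN_STORAGE_KEY, JSON.stringify(coins));
}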

Sending tokens

The send page uses the wallet’s native makeTransfer for shielded transfers. The wallet handles everything, including proving; however, you still need to call submitTransaction to broadcast it:

const desiredOutput = {
  kind: 'shielded' as const,
  type: selectedToken,
  value,
  recipient: recipientClean,
};

const result = await connectedApi.makeTransfer([desiredOutput]);
if (result.tx) {
  await connectedApi.submitTransaction(result.tx);
}

makeTransfer is the most convenient way of sending shielded tokens using the DApp Connector API.

Burning tokens

The Burn page uses the depositAndBurn circuit to destroy stored coins:

const coin = {
  nonce: hexToUint8Array(selectedCoin.nonce),
  color: hexToUint8Array(selectedCoin.color),
  value: BigInt(selectedCoin.value),
};

const result = await callDepositAndBurn(
  connectedApi,
  addresses.shieldedCoinPublicKey,
  addresses.shieldedEncryptionPublicKey,
  coin,
  BigInt(amount)
);

After burning, the coin is removed from localStorage.

const updatedCoins = getStoredCoins().filter((c) => c.id !== selectedCoin.id);
saveStoredCoins(updatedCoins);

Caveat: sendImmediateShielded sends change to kernel.self() (the smart contract). A partial burn therefore leaves a contract-owned shielded output that is not tracked anywhere, which is why the UI defaults to a full burn.

Home and balance display

The main dashboard displays 3 types of data:

Shielded balance(s) – the combined token balance, shieldedBalanceTotal, computed by summing across all shielded balances. loadWalletState calls connectedApi.getShieldedBalances() internally, and the page refreshes every 15 seconds:

const { balances, loadWalletState } = useWalletStore();

useEffect(() => {
  if (!isConnected) return;
  loadWalletState();
  const id = setInterval(() => loadWalletState(), 15_000);
  return () => clearInterval(id);
}, [isConnected, loadWalletState]);

const shieldedBalanceTotal = (() => {
  if (!balances?.shielded) return null;
  const entries = Object.entries(balances.shielded);
  if (entries.length === 0) return 0n;
  return entries.reduce((sum, [, v]) => sum + (v ?? 0n), 0n);
})();

Contract states like totalSupply and totalBurned are fetched via the getContractState helper, which uses ledger() to deserialize the raw bytes into readable data.

const [stats, setStats] = useState<{ totalSupply: bigint; totalBurned: bigint } | null>(null);

useEffect(() => {
  if (contractAddress) {
    getContractState(contractAddress).then(setStats);
  }
}, [contractAddress]);
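
getContractState itself is not reproduced here, but the shape of the idea is roughly the following. The queryContractState call and the ledger() export from the generated contract module are assumptions based on the usual midnight-js pattern, not verified against this repo:

export async function getContractState(contractAddress: string) {
  // Ask the indexer for the contract's current public state (assumed API shape).
  const publicDataProvider = indexerPublicDataProvider(INDEXER_HTTP, INDEXER_WS);
  const state = await publicDataProvider.queryContractState(contractAddress);
  if (!state) return null;

  // The compiler-generated contract module exports ledger(), which deserializes the raw state bytes.
  const contractModule = await import(`${CONTRACT_PATH}/contract/index.js`);
  const l = contractModule.ledger(state.data);
  return { totalSupply: l.totalSupply as bigint, totalBurned: l.totalBurned as bigint };
}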

3. The mint-and-send atomic pattern

The mintAndSend circuit pattern solves a critical problem in shielded token design.

The main issue is that a freshly minted shielded coin is not immediately spendable via sendShielded when it has not yet been committed to the Merkle tree. If you mint a coin in transaction X, you cannot spend it in transaction X+1 without waiting for it to be included in the Merkle tree and obtaining its mt_index.

sendImmediateShielded is different: it bypasses the Merkle qualification by using mt_index: 0.

The circuit pattern:

  1. mintShieldedToken(..., kernel.self()) — mint shielded coins to the kernel (smart contract)
  2. sendImmediateShielded(coin, recipient, amount) — forward to the recipient

Either both steps succeed, or the entire transaction fails. The recipient receives a fully qualified shielded coin that is spendable in future transactions with sendShielded once it is committed to the Merkle tree.

depositAndBurn circuit pattern:

  1. receiveShielded(coin) — deposits user coins into the transaction
  2. sendImmediateShielded(coin, burnAddr, amount) — burn it immediately in the same transaction

This atomic pattern makes it possible to burn a shielded coin through the smart contract without using sendShielded with mt_index, which requires the commitment of the coin to the Merkle tree.

4. Key architectural decisions

  • Proving strategy: dappConnectorProofProvider (wallet-backed). Built-in ledger circuits like output are not generated by the Compact compiler; the wallet has them.
  • Send path: wallet makeTransfer for transfers, smart contract depositAndBurn for burns. makeTransfer handles change correctly; smart contract burns update totalBurned.
  • Coin storage: localStorage via coinStore.ts. The DApp Connector API does not expose individual coin nonces; storing mint results enables smart contract burns.
  • Burn default: full burn. Partial burns via depositAndBurn lock change in the smart contract.
  • Network: Preprod. Testnet with faucet support.

Conclusion

You have now built a complete shielded token DApp: minting privacy-preserving tokens with mintShieldedToken, atomically forwarding freshly minted coins with sendImmediateShielded, burning tokens with receiveShielded + sendImmediateShielded, and a React frontend with deploy, mint, send, burn, and balance display.

It is important to distinguish between sendImmediateShielded (spendable in the same transaction, bypassing the Merkle path) and sendShielded (requires the coin’s mt_index). Getting this right determines whether the coins you mint are immediately spendable or stuck waiting for Merkle-tree inclusion.

Next steps

  • Check the full repository source code on GitHub
  • Read the Midnight Compact language docs
  • Experiment with transferShielded by storing mt_index for committed coins
  • Add admin authentication to restrict minting privileges

Troubleshooting

  • Shielded balance shows 0 after mint: the wallet hasn’t synced the mint block yet. Wait ~15s for the auto-refresh or open the wallet extension to trigger a sync.
  • Burn page shows an empty dropdown: the Burn page only lists DApp-minted coins, not wallet-received coins. Use the Send page (makeTransfer to a burn address) to burn wallet balances.
  • Wallet disconnects during proving: ZK proof generation timed out in the wallet popup. Reconnect the wallet and make sure the extension is active and unlocked.
  • "Invalid shielded address" on Mint & Send: the recipient field expects a Bech32m address, not raw hex. Use parseShieldedAddress() to decode the wallet’s shielded address.
  • "Invalid Transaction: Custom error: 138" on burn: 1AM wallet dust sponsoring interferes with contract call balancing. Turn off dust sponsoring in the 1AM wallet settings.
  • "No compatible wallet found": the extension’s API version is outside 4.x. Update Lace or 1AM to the latest version.

How Knowledge-Based AI Works — From Rules to Inference

Before AI learned from massive datasets, many systems worked with explicit knowledge.

Facts.

Rules.

Inference.

That is the core of Knowledge-Based AI.

Core Idea

Knowledge-Based AI stores knowledge in a structured form.

Then it uses rules to derive new conclusions.

The system does not “learn” from data in the modern deep learning sense.

It reasons over what it already knows.

That makes the structure very different from machine learning.

The Key Structure

A simple Knowledge-Based AI system looks like this:

Knowledge Base → Rules → Inference Engine → Conclusion

Or, more compactly:

Knowledge-Based AI = Facts + Rules + Inference

The knowledge base stores information.

The rule system defines how conclusions can be derived.

The inference engine applies those rules.

Implementation View

At a high level, a rule-based system works like this:

store known facts

store IF-THEN rules

compare facts with rule conditions

apply matching rules

generate new facts or conclusions

repeat until no useful rule applies
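
That loop is small enough to sketch directly. Here is a minimal TypeScript version; the Fact and Rule shapes are purely illustrative and not taken from any particular library:

type Fact = string;

interface Rule {
  if: Fact[];   // every condition must already be a known fact
  then: Fact;   // the fact derived when the rule fires
}

// Forward chaining: keep applying rules until no new facts appear.
function forwardChain(facts: Set<Fact>, rules: Rule[]): Set<Fact> {
  let changed = true;
  while (changed) {
    changed = false;
    for (const rule of rules) {
      const fires = rule.if.every((condition) => facts.has(condition));
      if (fires && !facts.has(rule.then)) {
        facts.add(rule.then);
        changed = true;
      }
    }
  }
  return facts;
}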

This is why Knowledge-Based AI is easy to inspect.

You can often trace exactly which rule produced which conclusion.

That transparency is one of its biggest strengths.

Concrete Example

Imagine a simple medical expert system.

It may store facts like:

  • patient has fever
  • patient has cough
  • patient has fatigue

And rules like:

IF fever AND cough THEN possible infection

IF possible infection AND fatigue THEN recommend further test
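
Using the sketch from the Implementation View section, those facts and rules could be written like this (the strings are just labels):

const facts = new Set<Fact>(['fever', 'cough', 'fatigue']);
const rules: Rule[] = [
  { if: ['fever', 'cough'], then: 'possible infection' },
  { if: ['possible infection', 'fatigue'], then: 'recommend further test' },
];

forwardChain(facts, rules);
// facts now also contains 'possible infection' and 'recommend further test'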

The system does not train on millions of examples.

It applies explicit rules.

That makes the reasoning path easier to explain.

Rule-Based AI vs Machine Learning

This comparison is important.

Rule-Based AI:

  • uses explicit facts and rules
  • depends on human-designed knowledge
  • is easier to explain
  • struggles when the rule set grows too large or becomes brittle

Machine Learning:

  • learns patterns from data
  • improves through training
  • handles noisy and complex data better
  • can be harder to interpret

So the difference is not just old AI vs modern AI.

It is symbolic reasoning vs data-driven learning.

Both solve problems in different ways.

Forward Chaining vs Backward Chaining

Even with the same rules, inference can move in different directions.

Forward chaining starts from known facts.

It applies rules until it reaches conclusions.

Backward chaining starts from a goal.

It works backward to check whether the needed conditions are true.

Forward chaining:

  • data-driven
  • useful when you want to discover what follows from known facts
  • starts with available evidence

Backward chaining:

  • goal-driven
  • useful when you want to prove or verify a target conclusion
  • starts with the question

The difference is simple:

Forward chaining asks:

“What can I conclude from what I know?”

Backward chaining asks:

“What must be true for this goal to hold?”
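
A matching backward-chaining sketch, reusing the Fact and Rule shapes from earlier (it ignores cyclic rules for simplicity):

// Backward chaining: is the goal a known fact, or can some rule derive it?
function backwardChain(goal: Fact, facts: Set<Fact>, rules: Rule[]): boolean {
  if (facts.has(goal)) return true;
  return rules
    .filter((rule) => rule.then === goal)
    .some((rule) => rule.if.every((condition) => backwardChain(condition, facts, rules)));
}

// backwardChain('recommend further test', facts, rules) === true for the medical example above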

Why Inference Engines Matter

The inference engine is the part that makes the system active.

A knowledge base alone only stores information.

Rules alone only define possible logic.

The inference engine applies the rules to produce conclusions.

That is why it is the execution layer of Knowledge-Based AI.

Without inference, the system is just a database.

With inference, it becomes a reasoning system.

Why Expert Systems Were Important

Expert systems are one of the clearest applications of Knowledge-Based AI.

They encode domain knowledge from human experts.

Then they use rules to make recommendations or decisions.

Examples include:

  • medical diagnosis support
  • troubleshooting systems
  • configuration systems
  • rule-based decision support

Their biggest strength is explainability.

Their biggest weakness is maintenance.

As the domain grows, the rule base can become difficult to manage.

Logical Extensions

Knowledge-Based AI also connects to formal reasoning.

Logic programming, such as PROLOG, represents knowledge as logical relations.

Theorem proving uses formal logic to verify statements.

Commonsense reasoning tries to represent everyday assumptions that humans usually take for granted.

These extensions show the same basic idea:

Represent knowledge explicitly.

Then reason over it.

Recommended Learning Order

If Knowledge-Based AI feels broad, learn it in this order:

  1. Knowledge Base
  2. Rule-Based System
  3. Inference Engine
  4. Forward Chaining
  5. Backward Chaining
  6. Expert System
  7. Logic Programming
  8. Theorem Proving
  9. Commonsense Reasoning

This order works because you first understand storage.

Then rules.

Then inference.

Then practical and logical extensions.

Takeaway

Knowledge-Based AI is built on explicit knowledge and reasoning.

The shortest version is:

Knowledge-Based AI = facts + rules + inference

It is not mainly about learning from data.

It is about using stored knowledge to reach conclusions.

If you remember one idea, remember this:

A knowledge-based system becomes intelligent when stored rules can generate new conclusions from known facts.

Discussion

When building AI systems, do you prefer transparent rule-based reasoning, or flexible data-driven learning?

Originally published at zeromathai.com.
Original article: https://zeromathai.com/en/knowledge-based-ai-hub-en/

GitHub Resources
AI diagrams, study notes, and visual guides:
https://github.com/zeromathai/zeromathai-ai