I Built 23 Security Tools That AI Agents Can Use

I wanted a single interface where an AI agent could run WHOIS, pull SSL certs, enumerate subdomains, check CVEs, and query threat intel feeds — all from one prompt.

So I built 23 security tools as an MCP server. Any AI agent that speaks MCP can call them natively.

Here’s what I built, how to set it up, and what I learned.

Setup (2 minutes)

Let me start with the setup because it’s the simplest part.

Add this to your MCP client config:

{
  "mcpServers": {
    "contrast": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-remote", "https://api.contrastcyber.com/mcp/"]
    }
  }
}

Works with Claude Desktop, Cursor, Windsurf, Cline, VS Code — anything that speaks MCP.

No API key. No signup. 100 requests/hour free.

The 23 Tools

Recon — “What’s running on this domain?”

  • domain_report: Full security report — DNS, WHOIS, SSL, subdomains, risk score
  • dns_lookup: A, AAAA, MX, NS, TXT, CNAME, SOA records
  • whois_lookup: Registrar, creation date, expiry, nameservers
  • ssl_check: Certificate chain, cipher suite, expiry, grade (A-F)
  • subdomain_enum: Brute-force + Certificate Transparency logs
  • tech_fingerprint: CMS, frameworks, CDN, analytics, server stack
  • scan_headers: Live HTTP security headers — CSP, HSTS, X-Frame-Options
  • email_mx: Mail provider, SPF/DMARC/DKIM validation
  • ip_lookup: PTR, open ports, hostnames, reputation
  • asn_lookup: AS number, holder, IP prefixes

Real scenario: “Check if any of our subdomains have expiring SSL certs” — the agent calls subdomain_enum, loops through each result with ssl_check, and reports which ones expire within 30 days. Zero code.

Vulnerability — “Is this CVE exploitable?”

  • cve_lookup: CVE details, CVSS, EPSS score, KEV status
  • cve_search: Search by product, severity, or date range
  • exploit_lookup: Public exploits from GitHub Advisory + ExploitDB

Real scenario: “Find all critical CVEs for Apache httpd from the last 6 months that have public exploits” — one sentence, three tool calls chained automatically.

Threat Intelligence — “Is this IOC malicious?”

  • ioc_lookup: Auto-detect IP/domain/URL/hash → ThreatFox + URLhaus
  • hash_lookup: Malware hash reputation via MalwareBazaar
  • phishing_check: Known phishing/malware URL check
  • password_check: Breach check via HIBP (k-anonymity, password never sent)
  • email_disposable: Disposable/temporary email detection

Real scenario: You get a suspicious URL in Slack. Paste it and ask “is this safe?” — the agent runs phishing_check + ioc_lookup and tells you if it’s a known threat.

Code Security — “Does my code have vulnerabilities?”

  • check_secrets: Detect hardcoded AWS keys, tokens, passwords in source
  • check_injection: SQL injection, command injection, path traversal
  • check_headers: Validate security header configuration

Real scenario: Before a PR merge, ask your agent to scan the diff for hardcoded secrets and injection vulnerabilities.

Phone & Email — “Is this contact legit?”

  • phone_lookup: Validation, country, carrier, line type

What It Looks Like

“Run a full security audit on example.com”

Domain: example.com
Risk Score: 32/100 (Low)

DNS: 6 records found
SSL: Grade A, expires 2027-01-15, TLS 1.3
Headers: 4/7 present (missing CSP, HSTS preload, Permissions-Policy)
Subdomains: 3 found
WHOIS: Registered 1995-08-14, ICANN
Tech: Akamai CDN, nginx

“Check if CVE-2024-3094 has public exploits”

CVE-2024-3094 (xz backdoor)
CVSS: 10.0 CRITICAL
EPSS: 0.947 (top 0.1%)
KEV: Yes — actively exploited
Exploits found: 3

“Is this password breached: hunter2”

EXPOSED in 17,043 breaches
Do NOT use this password.
(checked via k-anonymity — password was never transmitted)
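
As background, the k-anonymity scheme that Have I Been Pwned exposes works like this: hash the password with SHA-1 locally, send only the first five hex characters of the hash to the range endpoint, and compare the returned suffixes on your own machine. Here is a minimal Python sketch of that public protocol — this is the HIBP range API itself, not ContrastAPI's internal implementation:

import hashlib
import urllib.request

def hibp_breach_count(password: str) -> int:
    # SHA-1 the password locally; only the first 5 hex characters ever leave this machine.
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    # The API returns lines of "<hash-suffix>:<count>"; match our suffix locally.
    for line in body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(hibp_breach_count("hunter2"))  # a very large number — widely breached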

Why MCP?

ContrastAPI is also a REST API with a Node.js SDK. You can curl it from any language.

But MCP changes the workflow:

Without MCP: Call endpoint → parse JSON → decide next step → call another endpoint → parse again → format output.

With MCP: “Audit this domain.” Done.

The agent picks the right tools, chains them, and gives you a summary. You focus on decisions, not plumbing.
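
To make the plumbing concrete, here is a rough sketch of the manual loop from the "without MCP" path. The base URL, endpoint paths, and field names below are hypothetical placeholders for illustration only — they are not ContrastAPI's actual routes (those live in the API docs):

import requests

BASE = "https://api.example-scanner.invalid"  # placeholder, not the real API

def expiring_certs(domain: str, days: int = 30) -> list[dict]:
    # Step 1: call an endpoint and parse the JSON yourself (hypothetical /subdomains route).
    subs = requests.get(f"{BASE}/subdomains", params={"domain": domain}).json()["subdomains"]
    findings = []
    for sub in subs:
        # Step 2: decide the next step and call another endpoint per result (hypothetical /ssl route).
        cert = requests.get(f"{BASE}/ssl", params={"host": sub}).json()
        # Step 3: parse again and format the output.
        if cert.get("days_until_expiry", 10**6) < days:
            findings.append({"host": sub, "days_until_expiry": cert["days_until_expiry"]})
    return findings

With MCP, that orchestration — picking endpoints, parsing, deciding the next call — is exactly the part the agent handles for you.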

Architecture

  • FastAPI + official MCP Python SDK
  • 30 REST endpoints, 23 MCP tools (same backend)
  • 1,115 tests (912 API + 203 C scanner)
  • Domain scanner written in C — scores SSL, DNS, headers, email in under 2 seconds
  • All data from free, public sources — no paid feeds, no vendor lock-in

What I Learned

1. No API key = fastest adoption.
I removed the API key requirement and traffic jumped immediately. Zero friction wins. The free tier (100 req/hr) is generous enough that nobody has hit the limit yet.

2. MCP users are stickier.
MCP users make more requests per session than REST users. Once an agent has access to the tools, it chains them naturally — a single prompt can trigger 5-10 tool calls.

3. Get listed everywhere, early.
mcp.so, mcpservers.org, Smithery — these directories drive most of the discovery right now. The ecosystem is early and low-competition.

Limitations

Being transparent about what this isn’t:

  • Passive only — no port scanning, no active exploitation. This is OSINT and public data, not a pentest tool.
  • Rate limited — 100 req/hr free, 1000/hr on Pro ($19/mo). Enough for individual use, not bulk scanning.
  • Solo project — I’m one developer. Response times are fast, but I don’t have an SRE team on-call.
  • You don’t need API keys — we handle the integrations (Shodan, AbuseIPDB, ThreatFox, NVD, and more). No vendor accounts to set up on your end.

Try It

  • GitHub: github.com/UPinar/contrastapi
  • MCP setup: contrastcyber.com/mcp-setup
  • Web scanner: contrastcyber.com
  • API docs: api.contrastcyber.com

Free. Open source. No API key.

If you find it useful, a ⭐ on GitHub helps more than you think.

What security tools do you wish your AI agent could use? I’m always looking for what to build next.

Java Annotated Monthly – April 2026

It’s safe to say March was defined by one thing: Java 26. In this issue of Java Annotated Monthly, we’ve curated a rich selection of articles to help you get the full picture of the release. Marit van Dijk joins us as the featured guest author, bringing her expertise to help you navigate the changes with confidence. Alongside our Java 26 coverage, you’ll find our regular roundup of AI developments, Spring updates, Kotlin news, industry trends, and community reads that caught our eye.

Featured Content

Marit van Dijk

Marit van Dijk is a Java Champion and Developer Advocate at JetBrains with over 20 years of software development experience. She’s passionate about building great software with great people, and making developers’ lives easier.

Marit regularly presents at international conferences and shares her expertise through webinars, podcasts, blog posts, videos, and tutorials. She’s also a contributor to the book 97 Things Every Java Programmer Should Know (O’Reilly Media).

March held a lot of interesting things for Java. First of all, there was the Java 26 release on March 17. You can read all about Java 26 in IntelliJ IDEA on the blog, and find more links on Java 26 in the Java sections below.

Also in March, JavaOne took place in Redwood Shores, USA. During the community keynote, our colleague Anton Arhipov talked about 25 years of IntelliJ IDEA. In case you missed it, we also did a Duke’s Corner podcast and a Foojay podcast on the same topic. And of course, the IntelliJ IDEA documentary was released this month. Also at JavaOne, we announced that Koog is coming to Java, if you want to try JetBrains’ Koog AI agent with Java instead of Kotlin.

IntelliJ IDEA 2026.1 was just released. Of course we have Java 26 support from day one, as well as improvements to the debugger for virtual threads, support for new Kotlin features, Spring Data and Spring Debugger features, new AI features, and more. You can read all about it on the blog or watch our release video.

The release of Java 26 also means that Piotr Przybył and I updated our talk, Learning modern Java the playful way, for Java 26. You can watch the recording from Voxxed Days Amsterdam, or catch us at multiple events around Europe. 

Java News

Check out all the Java news highlights in March: 

  • Java News Roundup 1, 2, 3, 4, 5
  • Java 26: What’s New?
  • HTTP Client Updates in Java 26
  • Java Performance Update: From JDK 21 to JDK 25
  • Quality Outreach Heads-up – JDK 27: Removal of ‘java.locale.useOldISOCodes’ System Property
  • Episode 51 “Unboxing Java 26 for Developers” 
  • Java 27 – Better Language, Better APIs, Better Runtime
  • Foojay Podcast #92: Java 26 Is Here: What’s New, What’s Gone, and Why It Matters in 2026
  • Java 26 in definitely UNDER 3 minutes
  • JDK 26 Security Enhancements

Java Tutorials and Tips

You can never have too many tips for getting more out of Java:

  • Java 26 for DevOps
  • Java 26 Is Here, And With It a Solid Foundation for the Future
  • Closed-world assumption in Java
  • JavaScript (No, Not That One): Modern Automation with Java
  • Redacting Sensitive Data from Java Flight Recorder Files
  • Foojay Podcast #91: 25 Years of IntelliJ IDEA: The IDE That Grew Up With Java
  • Vulnerable API usage: Is your Java code vulnerable?
  • Java 26 is boring, and that’s a good thing
  • Episode 49 “LazyConstants in JDK 26” 
  • Empty Should be Empty
  • Testing Elasticsearch. It just got simpler
  • A Bootiful Podcast: Cay Horstmann, legendary Java professor, author, lecturer 
  • Episode 50 “Towards Better Checked Exceptions” 
  • How is Leyden improving Java Performance? 1, 2, 3
  • Java Is Fast. Your Code Might Not Be.
  • Data Oriented Programming, Beyond Records 
  • Evolving the Java Language: An Inside Perspective
  • Hybrid search with Java: LangChain4j Elasticsearch integration
  • Secure Coding Guidelines for Java
  • Estimating value of pi (π) using Monte Carlo Simulation and Vector API
  • Javable: generate Java-friendly wrappers for Kotlin with KSP

Kotlin Corner

Stay sharp with the latest Kotlin news and practical tips:

  • Kotlin 2.3.20 Released 
  • Amper 0.10 – JDK Provisioning, a Maven Converter, Custom Compiler Plugins, and More 
  • The klibs.io source repository was made public.
  • Building a Deep Research Agent with Koog — Teaching Your Agent to Think in Phases 
  • Koog Comes to Java: The Enterprise AI Agent Framework From JetBrains
  • Introducing Tracy: The AI Observability Library for Kotlin 
  • KotlinConf’26 Speakers: In Conversation with Josh Long 

AI 

Plenty of AI reads this month. Pick what catches your eye:

  • Intelligent JVM Monitoring: Combining JDK Flight Recorder with AI
  • AI coding skills from the engineers who build the JVM ecosystem
  • Vibe Coding, But Production-Ready: A Specs-Driven Feedback Loop for AI-Assisted Development
  • Busting AI Myths and Embracing Realities in Privacy & Security
  • Shaping Jakarta Agentic AI Together – Watch the Open Conversation
  • how i automated my life with mcp servers
  • 10 things i hate about ai
  • Writing an agent skill 
  • Hacking AI – How to Survive the AI Uprising
  • Stop Fighting Your AI: Engineering Prompts That Actually Work
  • Four Patterns of AI Native Development
  • Interactive Rubber Ducking with GenAI 
  • The Oil and Water Moment in AI Architecture
  • Look Inside a Large Language Model to Become a Better Java Developer
  • A Senior Engineer Tries Vibe Coding
  • How We Built a Java AI Agent by Connecting the Dots the Ecosystem Already Had 

Languages, Frameworks, Libraries, and Technologies

Spring updates and more tech news, all in one place:

  • This Week in Spring 1, 2, 3, 4
  • Data Enrichment in MongoDB
  • Supercharge your JVM performance with Project Leyden and Spring Boot by Moritz Halbritter
  • A Typo Led to the Creation of Spring Cloud Contract • Marcin Grzejszczak & Jakub Pilimon • GOTO 2026
  • A Bootiful Podcast: Neo4j legend Jennifer Reif
  • A Bootiful Podcast: Spring Messaging Legend Soby Chacko
  • Blending Chat with Rich UIs with Spring AI and MCP Apps
  • Java Microservices(SCS) vs. Spring Modulith
  • Moving beyond Strings in Spring Data
  • Quarkus has great performance – and we have new evidence
  • Modeling One-to-Many Relationships in Java with MongoDB
  • Clean Architecture with Spring Boot and MongoDB

Conferences and Events

Pick your next events to attend:

  • Spring I/O – Barcelona, Spain, April 13–15; Come say hi at the JetBrains booth and join the community run! 
  • Java Day Istanbul – Istanbul, Türkiye, April 17–18; Anton Arhipov is a speaker.  
  • JCON EUROPE – Cologne, Germany, April 20–23; Marit van Dijk will talk about learning modern Java the playful way.
  • Great International Developer Summit – Bengaluru, India, April 21–24; Join Siva Katamreddy’s talk on Spring AI + MCP. 
  • Devoxx France – Paris, France, April 22–24; Check out the talks by Anton Arhipov and Marit van Dijk.  
  • Devoxx Greece – Athens, Greece, April 23–25; Marit van Dijk is a speaker. 
  • Voxxed Days Bucharest – Bucharest, Romania, April 28–29; And if you haven’t caught Marit van Dijk during this busy month of hers, here’s the last chance to hear her speak in April.

Culture and Community

Your go-to section to slow down and think about the industry, self-growth, and more:

  • Mindful Leadership in the Age of AI
  • Can we still make software that sparks joy?
  • Information Flow: The Hidden Driver of Engineering Culture
  • Beyond the Code: Hiring for Cultural Alignment
  • Build a Spaced Repetition Flashcard API with Spring Boot & MongoDB (Part 1)
  • Where Do Humans Fit in AI-Assisted Software Development?
  • Green IT: How to Reduce the Impact of AI on the Environment
  • Does Language Still Matter in the Age of AI? Yes — But the Tradeoff Has Changed
  • IntelliJ IDEA: The Documentary | An origin story 
  • The Software Architect Elevator 

And Finally…

Top picks from the IntelliJ IDEA blog:

  • What’s fixed in IntelliJ IDEA 2026.1
  • Java 26 in IntelliJ IDEA
  • IntelliJ IDEA’s New Kotlin Coroutine Inspections, Explained
  • Cursor Joined the ACP Registry and Is Now Live in Your JetBrains IDE
  • Sunsetting Code With Me
  • Koog Comes to Java: The Enterprise AI Agent Framework From JetBrains
  • AI-Assisted Java Application Development with Agent Skills
  • Core JavaScript and TypeScript Features Become Free in IntelliJ IDEA

That’s it for today! We’re always collecting ideas for the next Java Annotated Monthly – send us your suggestions via email or X by April 20. Don’t forget to check out our archive of past JAM issues for any articles you might have missed!

KVerify: A Two-Year Journey to Get Validation Right

In December 2023, I wrote a small article about a utility I called ValidationBuilder. The idea was simple: a DSL where you’d call validation rules as extension functions on property references, collect all violations in one pass, and get a result back. Ktor-specific, but the concept was portable.

I published it and moved on.

Except I didn’t.

The Problem

I came to Kotlin without a Java background. Spring was my first serious framework, and I didn’t really understand it.

My issue was specific: I was reaching for Kotlin object declarations where Spring wanted managed beans. I was hardcoding configuration values that should have been injected, not because I didn’t know configuration existed, but because I couldn’t figure out how to bridge Spring’s dependency injection with Kotlin’s object. I was working around my own knowledge gaps without realizing that’s what I was doing. Everything ran, so I assumed everything was fine.

Eventually I moved to Ktor. Deliberately — I wanted less magic, more control. What I got instead was a different kind of overwhelming. Ktor gives you primitives and expects you to build the structure yourself. No built-in validation, no standard error handling, no guardrails. You figure it out or you don’t.

That turned out to be one of the better things that happened to me as a developer. Without a framework making decisions for me, I had to actually learn why certain patterns exist. Architecture, separation of concerns, what makes code maintainable — I learned it the hard way, by building things from scratch and getting them wrong.

But the validation problem stayed unsolved. Hand-written if checks everywhere. Error messages as hardcoded strings. No reuse, no structure. I looked at Konform and Valiktor. Valiktor was already abandoned. Konform was actively maintained and had a reasonable DSL — but I looked at it, decided it would take too long to learn properly, and moved on. Which, in hindsight, is a funny reason to spend the next two years building my own library from scratch.

I didn’t have precise words for what was wrong. I just knew the answer wasn’t there.

So in December 2023, I wrote my own.

The Library

Seven months later, in July 2024, I started a proper repository. Not because I had a clear vision of what the library should be — I just wanted to stop copying the same validation code between projects. A few lines of dependency instead of manually replicating the ValidationBuilder every time.

That was the entire plan.

NamedValue

The library started with a concept called NamedValue — a simple wrapper: a value and a name, travelling together.

data class NamedValue<T>(val name: String, val value: T)

The reason was practical. Validation error messages need to reference the field that failed — “password must not be blank”, “age must be at least 18”. Without NamedValue, you’d have to pass the field name to every rule call manually. With it, the name was always there, carried alongside the value, available to any rule that needed it. Rules were just extension functions on NamedValue<T>:

password.named("password").notBlank().ofLength(8..255)

It felt clean. The infix named read naturally, the rules chained, and the name showed up in every error message automatically.

The problem was that it had quietly made a decision I hadn’t noticed yet — it tied the field name to the value itself, rather than to the validation context. That distinction wouldn’t matter for a while. And then it would matter enormously.

The Iterations

The 1.x releases came quickly. November, December 2024 — versions shipping, rules working, NamedValue doing its job. On paper, the library was functional.

But something kept feeling off. The ValidationContext had quietly accumulated weight:

fun validate(message: String, predicate: Predicate)
fun validate(rule: Rule<Unit>)
fun <T> T.validate(vararg rules: Rule<T>): T
fun <T> NamedValue<T>.validate(message: MessageCallback<T>, predicate: ValuePredicate<T>): NamedValue<T>
fun <T> NamedValue<T>.validate(ruleCallbackWithMessage: NamedValueRuleCallback<T>): NamedValue<T>

Five ways to validate. Type aliases for every possible callback shape. Each one added to solve a real problem — and together they made the interface feel like it was trying to be everything at once.

The fix was to stop and ask: what does a validation context actually need to do? The answer was one thing: decide what happens when a rule fails.

interface ValidationContext {
    fun onFailure(violation: Violation)
}

That’s it. Everything else was the rule’s problem, not the context’s.

This pattern repeated itself throughout 2025. Build something, feel the weight accumulate, find the one thing it actually needed to do, cut everything else. The library kept getting smaller. Each simplification felt right for a week, then revealed the next thing that was still wrong.

And through all of it, NamedValue stayed. Untouched. Seemingly fine.

5 AM

This was the state of things through most of 2025. Iterating. Releasing. Something always slightly wrong.

There were nights where I’d wake up at five — suddenly, with something already forming in my head. Not a diagram or a grand insight. Just code. An API shape. A specific thing that had been bothering me that now had an answer.

On July 14, 2025, that happened at 5:29 AM. I made two commits — a package restructuring I had been putting off for days — and started my day four hours early. I napped later and went to bed earlier that night.

The sleep during those periods wasn’t great. Not quite fever dreams, but something adjacent — restless, with the problem still running somewhere in the background. I wasn’t suffering dramatically. I was just genuinely stuck on something, and my brain apparently didn’t get the memo that work was supposed to stop.

Spring, Revisited

In January 2026, I decided to learn Spring again. Not to go back to it — just to understand it better. And since I had a library now, I figured I’d try using it alongside Spring’s own validation.

The code I wrote looked like this:

fun validate(context: ValidationContext) =
    with(context) {
        ::title.toNamed().verifyWith(
            provider.namedNotBlank(),
            provider.namedLengthBetween(3..255) { violation("Title must be between 3 and 255 characters long") }
        )
    }

Spring’s version, sitting right above it in the same file:

@NotBlank
@Length(min = 3, max = 255)
val title: String

I wasn’t trying to beat Spring at its own game — annotations are a different tool for a different philosophy. But looking at both side by side, something became impossible to ignore. The toNamed() call on every property. The provider object. The fact that adding a custom reason meant constructing a violation manually and losing the field name in the process. The constant risk of accidentally applying a regular rule to a named value or vice versa.

I had been living inside this DSL for over a year. I had stopped seeing it. Comparing it to anything — even Spring, which I had originally left — made the friction visible again.

NamedValue wasn’t a small ergonomics issue. It was a load-bearing flaw. And it had been there from the beginning.

The Decision

By January 2026, the problem had a name.

Not NamedValue specifically — the concept behind it. Rules and naming metadata were coupled. A rule for a named value was a different type than a rule for a plain value. That meant two rule sets, two providers, two of everything. Users had to constantly think about which kind of rule to reach for. The maintenance cost was real and growing.

I knew these two things shouldn’t coexist. What I didn’t know was how to separate them.

I had asked AI assistants about this problem before. Multiple times. Nothing useful came back — or maybe it did and I wasn’t ready to hear it yet. But one day in January 2026, I described the problem again, and the answer that came back was something I had been circling for two years without landing on: put the metadata in the context, not in the rule or the value. Rules validate values. Context carries where those values live.

I knew it was right immediately. Not because the AI said so — but because two years of the wrong model had made the right one recognizable the moment I saw it.

NamedValue was removed. Not in a single dramatic commit — it was simply no longer needed. The path would live in the context. Property references would put it there automatically. Rules would stay pure.

But “the context” now meant something different. The old ValidationContext — the interface that decided what to do on failure — was renamed to ValidationScope. A new ValidationContext took its place: a pure, immutable carrier of metadata. Same name, completely different responsibility. The scope executes. The context carries.

What followed was three months of more progress than the previous two years combined.

The New Context

Once NamedValue was gone, a new question appeared immediately: if the path lives in the context, how does the context store it?

The first attempt was a linked list — each context node held one path segment and a reference to its parent. Clean in theory, recursive in practice. It worked but felt fragile for deep nesting.

Two days later, I replaced it with something more ambitious — a design modeled directly after Kotlin’s CoroutineContext. Elements stored by key, retrievable by type, composable with +. It looked elegant:

interface ValidationContext {
    operator fun <E : Element> get(key: Key<E>): E?
    operator fun plus(context: ValidationContext): ValidationContext

    interface Key<E : Element>
    interface Element : ValidationContext {
        val key: Key<*>
    }
}

It lasted twenty-two days.

The flaw was fundamental. CoroutineContext uses key-based replacement — if you add an element with a key that already exists, it replaces the old one. That’s the right behavior for something like a coroutine dispatcher, where you want exactly one. But a validation path needs multiple NamePathElements to coexist. A path like user.friends[0].user requires the same element type to appear multiple times — key replacement silently destroyed them.

The next attempt dropped the key system entirely. Elements became a plain list, retrieved by type filtering. Simpler, but every + operation allocated a new list. Fine for shallow contexts, wasteful for deep ones.

Then fold replaced the list — a binary tree of CombinedContext(left, right) nodes, traversed lazily. No intermediate allocations. Better.

Then, two weeks later, one more simplification: replace fold with Iterable. Not because fold was wrong — but because Iterable is something every Kotlin developer already understands. filter, filterIsInstance, for loops — all of it works immediately, with no new protocol to learn. The context became something you could hand to any standard library function without explanation.

interface ValidationContext : Iterable<ValidationContext.Element>

That was the final form. Six weeks, five implementations, one interface.

The Rules Problem

There was one more problem to solve.

The scope controlled what happened on failure — collect, throw, or anything else. But “anything else” was the interesting part. What if you wanted to stop after the first violation and skip all remaining rules? The scope needed to intercept rule execution to do that.

So rules came back as an explicit abstraction. A rule received the context and the value, ran a check, and returned a violation or null:

interface Rule<in T> {
    fun check(context: ValidationContext, value: T): Violation?
}

The scope would call it, inspect the result, and decide what to do next:

fun <T> enforce(rule: Rule<T>, value: T) {
    val violation = rule.check(validationContext, value) ?: return
    onFailure(violation)
}

This worked — until you tried to extend a scope with additional context. Kotlin’s by delegation copies the implementation from the delegated object, not the receiver. So when ContextExtendedValidationScope delegated to the original scope, enforce ran with the original scope’s validationContext — not the overridden one. The extended context, the one with the path segment you just added, was completely invisible to the rule.

internal class ContextExtendedValidationScope<out T : ValidationScope>(
    val originalValidationScope: T,
    val additionalContext: ValidationContext,
) : ValidationScope by originalValidationScope {
    override val validationContext: ValidationContext
        get() = originalValidationScope.validationContext + additionalContext
}

The overridden validationContext was there. The enforce implementation just never called it.

I tried to fix the delegation chain. There was no clean way.

The solution was to stop passing context and value to the rule entirely. A rule is just a check — it closes over whatever it needs from the surrounding code. By the time enforce is called, the closure already has the right scope, the right context, the right value:

fun interface Rule {
    fun check(): Violation?
}

scope.enforce {
    if (value.isBlank()) NotBlankViolation(
        validationPath = scope.validationContext.validationPath(),
        reason = "must not be blank",
    ) else null
}

value and scope are already in the closure. The rule doesn’t need parameters. The delegation problem disappeared — because there was nothing left to delegate except the outcome.

Polishing

By late February 2026, the architecture was right. What remained was a different kind of work — not building, but removing.

The guiding principle was simple: leave only the shape. Strip everything down to the minimal core, then test it, then add back only what proved it deserved to exist. Any helper function that wasn’t load-bearing — gone. Any abstraction that existed for convenience rather than necessity — gone.

The commit messages tell the story plainly. reduce public API surface to minimal stable core. replace class-based Rule hierarchy with extension functions. Rule composition operators — and, or, ! — removed. The Verification interface removed, kept as a concrete class. FirstViolationValidationScope removed entirely.

It wasn’t painful. Overshipping and dealing with it after a release felt far worse than cutting now. A public API is a promise — every method you expose is something you have to maintain, something users will depend on, something you can’t easily take back. The smaller the surface, the more deliberate every addition has to be.

What remained after the cuts was small enough to hold in your head. Clean enough to document properly. Stable enough to release.

On March 23, 2026 — two years and nine months after the first commit — KVerify 2.0.0 shipped.

The Comparison

This is the core of KVerify on its first commit, July 13, 2024:

// Collect all violations
inline fun collectViolations(block: CollectAllContext.() -> Unit): ValidationResult

// Fail on first violation
inline fun failFast(block: FailFastContext.() -> Unit): ValidationResult

// Rules as extension functions
fun NamedString.notBlank(message: String = "$name must not be blank"): NamedString =
    validate(message) { it.isNotBlank() }

// Violations collected into a result
val isValid get() = this is ValidationResult.Valid
inline fun ValidationResult.onInvalid(block: (List<ValidationException>) -> Unit): ValidationResult
inline fun ValidationResult.onValid(block: () -> Unit): ValidationResult

And this is KVerify 2.0.0:

// Collect all violations
inline fun validateCollecting(block: CollectingValidationScope.() -> Unit): ValidationResult

// Fail on first violation
inline fun <T> validateThrowing(block: ThrowingValidationScope.() -> T): T

// Rules as extension functions
fun Verification<String>.notBlank(reason: String? = null): Verification<String> =
    apply {
        scope.enforce {
            if (value.isBlank()) NotBlankViolation(
                validationPath = scope.validationContext.validationPath(),
                reason = reason ?: "Value must not be blank",
            ) else null
        }
    }

// Violations collected into a result
val isValid: Boolean get() = violations.isEmpty()
inline fun ValidationResult.onInvalid(block: (List<Violation>) -> Unit)
inline fun ValidationResult.onValid(block: () -> Unit)

The user-facing API changed completely. The underlying shape did not.

Two strategies — collect or throw. Rules as extension functions. A result you can branch on. onValid, onInvalid. These were all there on day one. Not because the first version was well-designed — it wasn’t. But because the instinct pointing toward them was right.

What two years bought was the understanding of why they were right. The NamedValue coupling, the overengineered contexts, the delegation trap, the CoroutineContext detour — none of it was wasted. Each wrong turn made the correct shape more recognizable. By the time the right architecture appeared, it was obvious. Not because it was simple — because everything else had already been tried.

Experience isn’t knowing the answer. It’s having been wrong enough times to recognize it when you finally see it.

What it looks like today

data class Address(val street: String, val city: String, val postalCode: String)
data class RegisterRequest(val username: String, val email: String, val age: Int, val address: Address)

fun ValidationScope.validate(request: RegisterRequest) {
    verify(request::username).notBlank().minLength(3).maxLength(20)
    verify(request::email).notBlank()
    verify(request::age).atLeast(18)

    pathName("address") {
        verify(request.address::street).notBlank()
        verify(request.address::city).notBlank()
        verify(request.address::postalCode).exactLength(5)
    }
}

val result = validateCollecting { validate(request) }

result.violations
    .filterIsInstance<PathAwareViolation>()
    .forEach { println("${it.validationPath}: ${it.reason}") }

If any of this resonated — the frustration with existing tools, the design problems, or just the library itself — KVerify is on GitHub. A star goes a long way for a solo project. And if you try it and something feels wrong, open an issue. Two years of iteration taught me that the best feedback comes from someone actually using it.

Getting Data from Multiple Sources in PowerBI: A Practical Guide to Modern Data Integration

INTRODUCTION

According to Microsoft, Power BI is a complete reporting solution that offers data preparation, data visualization, distribution, and management through development tools and an online platform. Power BI can scale from simple reports using a single data source to reports requiring complex data modeling and consistent themes. Use Power BI to create visually stunning, interactive reports that serve as the analytics and decision engine behind group projects, divisions, or entire organizations.

The foundation of every successful Power BI report is reliable data ingestion. Before a report can be built, the ability to extract data from various sources is the first crucial step. Connecting to SQL Server works differently from connecting to Excel, so understanding the nuances of each type of data connection is essential before you can use the rest of Power BI's tools for effective decision making.

In most real-world business contexts, data is typically spread across multiple sources rather than confined to one. A data analyst may need to integrate data from Excel files, CSVs, SQL Server databases, PDFs, JSON APIs, and SharePoint folders into a unified report. Power BI is well-equipped for this task, offering powerful tools like Get Data and Power Query to efficiently connect, combine, and transform data from various sources. This guide explores how Power BI enables multi-source data integration and provides a step-by-step approach to implementing it effectively.

In this guide, you will learn how to:

  • Connect Power BI to multiple data sources efficiently
  • Use Power Query to preview and explore your data
  • Detect and resolve data quality issues early
  • Build a strong foundation for accurate data modeling and reporting

Architecture Overview

At a high level, Power BI follows a layered architecture which consists of:

  • Power BI Desktop as the reporting and modeling tool
  • Multiple data sources, including:
    • Excel and Text/CSV files
    • SQL Server databases
    • JSON and PDF files
    • SharePoint folders

All data flows into Power BI through Power Query, where it is reviewed and prepared before loading into the data model.

Connecting Data from Multiple Sources

Power BI allows you to connect to a wide range of data sources. Below are step-by-step guides for each major source.

Step 1: Connecting to Excel

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → Excel
  3. Browse and select your Excel file
  4. In the Navigator window, select the required sheets or tables
  5. Click Load (to import directly) or Transform Data (to clean first)

Step 2: Connecting to Text/CSV Files

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → Text/CSV
  3. Browse and select the CSV file (e.g., MultiTimeline.csv)
  4. Preview the dataset in the dialog window
  5. Click Load or Transform Data

Step 3: Connecting to PDF

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → PDF
  3. Select the PDF file
  4. Wait for Power BI to detect available tables
  5. Select the desired table(s)
  6. Click Load or Transform Data

Step 4: Connecting to JSON

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → JSON
  3. Select the JSON file or input API endpoint
  4. Load the data into Power Query
  5. Expand nested fields to structure the data properly
  6. Click Close & Apply

Step 5: Connecting to SharePoint Folder

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → SharePoint Folder
  3. Enter the SharePoint site URL
  4. Click OK and authenticate if required
  5. Select files from the folder
  6. Click Combine & Transform Data

Step 6: Connecting to MySQL Database

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → MySQL Database
  3. Enter the server name and database
  4. Provide authentication credentials and click Connect
  5. Select the required tables
  6. Click Load or Transform Data

Step 7: Connecting to SQL Server

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → SQL Server
  3. Enter the server name (e.g., localhost)
  4. Leave the database field blank (or specify one if needed)
  5. Click OK
  6. Select authentication method (e.g., Windows credentials)
  7. In the Navigator pane, expand the database (e.g., AdventureWorksDW2020)
  8. Select required tables such as:
    • DimEmployee
    • DimProduct
    • DimAccount
  9. Click Transform Data to open Power Query Editor

Step 8: Connecting to Web Data

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → Web
  3. Enter the URL of the web page or API
  4. Click OK
  5. Select the data table or structure detected
  6. Click Load or Transform Data

Step 9: Connecting to Azure Analysis Services

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → Azure → Azure Analysis Services
  3. Enter the server name
  4. Select the database/model
  5. Choose connection mode (Live connection recommended)
  6. Click Connect

Conclusion

Integrating data from multiple sources in Microsoft Power BI is a foundational skill for modern data analysts. By understanding the architecture and following a structured approach, you can transform fragmented datasets into cohesive, insight-driven reports. Ultimately, great analytics begins with great data, and great data begins with how well you connect, prepare, and understand it before using it to make business decisions.

Mastering tools like Power Query and applying best practices in data modeling will significantly enhance the quality and performance of your analytics solutions.

Dark Dish Lab: A Cursed Recipe Generator

What I Built

Dark Dish Lab is a tiny, delightfully useless web app that generates cursed food or drink recipes.

You pick:

  • Hated ingredients
  • Flavor chaos (salty / sweet / spicy / sour)

Then it generates a short “recipe” with a horror score, a few steps, and a warning.

It solves no real-world problem. It only creates regret.

Demo

  • YouTube demo

Code

  • GitHub repo

How I Built It

  • Frontend: React (Vite)
    • Ingredient + flavor selection UI
    • Calls backend API and renders the generated result
  • Backend: Spring Boot (Java 17)
    • POST /api/generate endpoint
    • Generates a short recipe text and returns JSON
  • Optional AI: Google Gemini API
    • If AI is enabled and a key is provided, it asks Gemini for a very short recipe format
    • If AI is disabled or fails, it falls back to a non-AI generator
  • Notes
    • Only Unicode emojis are used (no emoji image assets)
    • API keys are kept in local env files and not committed

How I Leveraged Google AI

Gemini (via the Gemini API) is the text generator behind Dark Dish Lab’s “cursed recipe” output.

Instead of using AI as a generic chatbot, I use it as a formatting engine:

  • Input: selected ingredients + flavor axes (salty/sweet/spicy/sour)
  • Output: a short, structured recipe (name, 3 steps, 1 warning)
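
For illustration, a request to the backend might look roughly like this — the endpoint is the POST /api/generate mentioned above, but the field names in the payload and the response shape are assumptions for this sketch, not the project's documented schema:

import requests

# Hypothetical payload shape — the real field names may differ.
payload = {
    "hatedIngredients": ["pickles", "white chocolate"],
    "flavors": {"salty": 2, "sweet": 5, "spicy": 1, "sour": 3},
}

resp = requests.post("http://localhost:8080/api/generate", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # expect a recipe name, a few steps, a warning, and a horror score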

To keep the experience stable and demo-friendly, I added a few guardrails:

  • Server-side API key only (kept in backend/.env.local, not committed)
  • Strict prompt constraints (no emojis, short length, fixed format)
  • Fallback generator when Gemini is disabled/unavailable
  • Output trimming to avoid unexpectedly long responses

Prize Category

  • Best Use of Google AI — I integrated the Gemini API to generate short, structured cursed recipes from the user’s selected ingredients and flavor axes.
  • Community Favorite angle — Try to generate the most cursed combo you can. Comment your ingredient + flavor picks and the horror score you got. Bonus points if your friends refuse to read it.

Why Developer Productivity Engineering is Underrated

This article was originally published on igorvoloc.com
I write about Developer Productivity Engineering — DX, AI-assisted dev, legacy migration, and business impact from the trenches.

I tracked my own time for two weeks, logging each hour. The numbers were discouraging enough that I stopped. Most of what I called ‘development time’ was actually spent waiting, searching, context-switching, rerunning flaky tests, or digging through outdated documentation and message threads.

This isn’t unique to me. It’s common, and most teams have come to accept it as normal. This invisible waste is what Developer Productivity Engineering (DPE) aims to eliminate.

There’s a deeper reason these problems persist: the people with the power to fund solutions almost never hear about them. Developers spot the slow builds, flaky pipelines, and broken docs immediately — but that pain rarely travels up the chain. Most engineers simply work around it, patching things locally or absorbing the friction as part of the job. By the time leadership hears about the issues, it’s usually reached crisis level.

The Waste Is Real

Developers lose hours each week to friction instead of building a product.1 58% report losing five or more hours weekly to inefficiencies like slow builds, broken tooling, and knowledge gaps—a number I’ve observed almost everywhere, and which matches the data.

62% of developers cite technical debt as their top frustration, while over 10% work at companies lacking CI/CD, automated testing, or DevOps.2

61% of developers spend at least 30 minutes per day searching for information.3 Over a third are blocked by knowledge silos, and a similar share re-answer questions they’ve already solved.3 People call it a documentation problem, but it’s also an architecture and onboarding problem. Knowledge and culture get little attention, do the most damage, and rarely show up on dashboards.

On a team of 50 developers with an average salary of $150K (based on US developer salary benchmarks), 30 minutes lost per person each day adds up to nearly $500K per year. That’s the equivalent of three full-time engineers lost to inefficiency.
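
The arithmetic behind that figure, as a quick sanity check using the article's own assumptions (50 developers, $150K average salary, an 8-hour day, 30 minutes lost per person per day):

team_size = 50
avg_salary = 150_000            # USD per year
minutes_lost_per_day = 30
workday_minutes = 8 * 60

lost_fraction = minutes_lost_per_day / workday_minutes          # 0.0625 of each workday
annual_cost = team_size * avg_salary * lost_fraction
print(f"${annual_cost:,.0f} per year")                          # $468,750 — "nearly $500K"
print(f"~{annual_cost / avg_salary:.1f} full-time engineers")   # ~3.1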

The waste is the problem DPE aims to solve. Most organizations under-invest in it.

Nobody’s Fixing Developer Productivity — Here’s Why

Your sales team gets a CRM. Nobody asks them to manage deals in a spreadsheet. Finance gets proper accounting software. Nobody questions the ROI on that.

Developers, often among the highest-paid and hardest to replace, must work around friction. They deal with broken CI pipelines, undocumented services, and spend time searching for information that should be readily available.

Here’s the part most managers don’t see: staff and senior engineers on your team are already doing DPE work. They just don’t call it that.

They notice the build is slow and spend a Saturday morning tracking down why. They write the runbook nobody asked for. They push for the migration nobody wants to focus on.

This is DPE work: invisible, uncredited, and taking time away from what teams actually measure. I’ve watched good engineers burn out this way — doing the most important work in the company while getting zero credit for it.

Half of tech managers say their companies measure productivity or DevEx, but only 16% have anyone dedicated to improving them.4

66% of developers say productivity metrics miss their real contributions.5 Most rely on crude measures: lines of code, story points, commit frequency. Engineers can game these in a month. Everyone knows the numbers aren’t real. And 60% of engineering orgs follow no specific measurement framework,1 though 90% rate improving productivity as a top initiative (8.2/10 on average).1

DORA’s 2025 research, surveying nearly 5,000 professionals, confirms the pattern: only 40% of teams excel at both speed and stability.6 The rest are held back by process friction and foundational gaps — not trade-offs. The “speed vs. stability” compromise is a myth: the best teams do both.

The core issue is that measuring is easy, but fixing is hard. If you track but never fix, you’re just surveilling. Trust drops, and nothing improves.

What Developer Productivity Engineering Actually Is

DPE is broader than developer experience. DX is one part, but not the whole. DPE treats the developer ecosystem as a system with four interconnected pillars:

  • Developer Experience — feedback loops, cognitive demand, flow state
  • Engineering Infrastructure — build systems, CI/CD, test automation, platform tooling, legacy migration
  • Knowledge and Culture — how information flows, gets captured, and outlasts turnover
  • AI Augmentation — how AI tools sit on top of and multiply whatever the other three produce

If one pillar is weak, the others can’t compensate. Developer experience is where most of that friction shows up first — and the most useful way to think about DX focuses on three dimensions that include most of what matters: feedback loops, cognitive demand, and flow state.7

  • Feedback loops are the most obvious. How long from git push to “did this work?” If the answer is 20 minutes, you’ve already lost the thread.
  • Cognitive demand is trickier. It’s not only about whether the code is hard; it’s about how much you need to remember to make a safe change. Every undocumented service boundary and each hidden dependency adds a hidden load. You only notice it when a new hire takes three months to become productive.
  • Flow state is the 2-3-hour window when someone is actually solving a hard problem. How often does the environment protect it?

The biggest influence on DX isn’t tooling — it’s cultural and organizational factors: how product management works, how decisions get made, and how teams communicate.7 Tooling matters, but the org sets the ceiling. This is why so many “DX improvement” initiatives fail: they buy tools without addressing the environments in which those tools operate.

Legacy systems are a drag on every engineer who touches them. Patterns like strangler fig and incremental migration work, but only if someone gives them priority. In most organizations, that person is a staff engineer making the case to leadership who don’t feel the friction directly.

And then there’s AI, which builds on top of all three pillars. According to DORA’s survey of nearly 5,000 professionals, over 80% of developers say AI has increased their productivity.6 About a third of new code in surveyed organizations now comes from AI.8

But AI only multiplies the system you already have. If your DX is good, AI tools make it even better. If your DX is broken, AI just makes things break faster.

The DORA 2025 report — the most comprehensive study of AI in software development to date — calls AI “an amplifier.”6 It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones. At Adidas, teams with loosely coupled architectures and fast feedback loops saw 20–30% productivity gains from AI and a 50% increase in “Happy Time.” Teams with slower feedback loops due to tight coupling saw little to no benefit.6

The Fix Exists — And It Compounds

The ROI Is Real

  • DX Core 4 framework: 3-12% increase in engineering efficiency (DX Core 4)9
  • DX Core 4 framework: 14% increase in R&D time on feature development (DX Core 4)9
  • Top-quartile DXI: 43% higher employee engagement (DX DXI)10
  • AI + fast feedback loops (Adidas): 20-30% productivity gains, 50% more “Happy Time” (DORA 2025)6
  • AI + developer training (Booking.com): up to 30% increase in merge requests (DORA 2025)6

A single point increase on the Developer Experience Index saves 13 minutes per developer per week. On a 200-person team, a 5-point improvement adds up to 11,000 developer-hours per year. That’s equivalent to five full-time engineers without additional hiring.
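
The same sanity check for the DXI claim (assuming roughly 2,000 working hours per engineer per year):

minutes_saved_per_point = 13   # per developer per week
points_gained = 5
team_size = 200

hours_per_year = minutes_saved_per_point * points_gained * team_size * 52 / 60
print(f"{hours_per_year:,.0f} developer-hours per year")       # ~11,267
print(f"~{hours_per_year / 2000:.1f} full-time engineers")     # ~5.6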

DPE interventions compound over time. Better DX means less time lost to tools and faster delivery. It also improves customer satisfaction — and this is where the numbers get serious.

24.5% of developers are happy at work. 47.1% are complacent. 28.4% are actively unhappy.11 The top satisfaction factor is autonomy and trust.

DPE maps directly onto what keeps engineers engaged — autonomy through better tooling, trust when you eliminate surveillance metrics, and quality because the environment actually supports the work. On a 50-person team, preventing even one or two senior departures per year can pay for a DPE initiative on its own.

Where to Actually Start

You don’t need a large initiative. One person with focus and about 90 days can make a difference. There’s no three-step plan — every organization is different.

The main mistake is treating DPE as a one-off initiative. It’s a practice, like security reviews or performance testing, that needs persistent attention.

But you still need a starting point. Identify the top three pain points by asking, not guessing. A short survey or a few brief conversations across teams is usually enough. The answers often differ from leadership’s assumptions, and the fixes are usually smaller than expected.

Fix one issue and measure the result. Problems are normal: migrations take longer, process changes meet resistance, or metrics don’t move as expected. Use early wins to gain credibility for the next improvement. Bring data.

If you’re a staff or senior engineer who struggles to get leadership’s attention, the data here is for you. You’ve likely been doing DPE work already, even if it’s invisible. Now you have a framework and a name for it.

What’s the one thing on your team that makes you close your laptop and walk away? I’m genuinely curious.

References

  1. Cortex. (2024). State of Developer Productivity Report
  2. Stack Overflow. (2024). Developer Survey 2024 — Most Common Frustrations
  3. Stack Overflow. (2024). Developer Survey 2024 — Time Spent Searching for Information
  4. JetBrains. (2024). Developer Ecosystem Survey 2024
  5. JetBrains. (2025). Developer Ecosystem Survey 2025
  6. DORA. (2025). State of AI-Assisted Software Development
  7. Noda, A., Storey, M.-A., Forsgren, N., Greiler, M. (2023). DevEx: What Actually Drives Productivity
  8. GitLab. (2026). Global DevSecOps Report
  9. DX. (n.d.). Measuring Developer Productivity with the DX Core 4
  10. DX. (n.d.). The One Number You Need to Increase ROI Per Engineer
  11. Stack Overflow. (2025). Developer Survey 2025

Matrices in Python

To solve computational problems we use different types of data structures, such as arrays and maps. One such data structure is the matrix, or 2D array. Matrices also act as the foundation for other data structures, such as graphs.

Defining matrices in Python

matrix = [
   [1, 2, 3],
   [4, 5, 6],
   [7, 8, 9]
]

Define a 3×3 matrix in Python (rows R = 3, columns C = 3), initialized with zeros:

matrix_3x3 = [[0] * 3 for _ in range(3)]
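
A quick note on why the comprehension is used instead of multiplying a row: [[0] * 3] * 3 copies references to the same inner list, so changing one row changes all of them.

aliased = [[0] * 3] * 3              # three references to the SAME inner list
aliased[0][0] = 1
print(aliased)                       # [[1, 0, 0], [1, 0, 0], [1, 0, 0]]

independent = [[0] * 3 for _ in range(3)]   # three separate rows
independent[0][0] = 1
print(independent)                   # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]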

Here are two popular matrix problems:

Transpose a matrix:

def calculate_transpose_of_matrix(mat: list[list]) -> list[list]:
    # In-place transpose of a square N x N matrix: swap elements across the main diagonal.
    N = len(mat)
    for i in range(N):
        for j in range(i + 1, N):
            mat[i][j], mat[j][i] = mat[j][i], mat[i][j]

    return mat

Print matrix in spiral pattern:

def print_matrix_in_spiral(mat: list[list]) -> None:
    # Walk the matrix layer by layer: top row, right column, bottom row, then left column.
    R = len(mat)
    C = len(mat[0])

    top = 0
    left = 0
    bottom = R - 1
    right = C - 1

    while top <= bottom and left <= right:
        for i in range(left, (right + 1)):
            print(mat[top][i], end=" ")
        top += 1
        for i in range(top, (bottom + 1)):
            print(mat[i][right], end=" ")
        right -= 1
        if top <= bottom:
            for i in range(right, (left - 1), -1):
                print(mat[bottom][i], end=" ")
            bottom -= 1
        if left <= right:
            for i in range(bottom, (top - 1), -1):
                print(mat[i][left], end= " ")
            left += 1
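
A quick check of both functions using the 3×3 matrix defined earlier (passing a copy to the transpose, since it mutates its argument in place):

matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
]

print(calculate_transpose_of_matrix([row[:] for row in matrix]))
# [[1, 4, 7], [2, 5, 8], [3, 6, 9]]

print_matrix_in_spiral(matrix)
# 1 2 3 6 9 8 7 4 5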

Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption.

The AI landscape is experiencing unprecedented growth and transformation. This post delves into the key developments shaping the future of artificial intelligence, from massive industry investments to critical safety considerations and integration into core development processes.

Key Areas Explored:

  • Record-Breaking Investments: Major tech firms are committing billions to AI infrastructure, signaling a significant acceleration in the field.
  • AI in Software Development: We examine how companies are leveraging AI for code generation and the implications for engineering workflows.
  • Safety and Responsibility: The increasing focus on ethical AI development and protecting vulnerable users, particularly minors.
  • Market Dynamics: How AI is influencing stock performance, cloud computing strategies, and global market trends.
  • Global AI Strategies: Companies are adapting AI development for specific regional markets.

This deep dive aims to provide developers, tech leaders, and enthusiasts with a comprehensive overview of the current state and future trajectory of AI.

#AI #ArtificialIntelligence #TechTrends #SoftwareEngineering #MachineLearning #CloudComputing #FutureOfTech #AISafety

Qodo Merge GitHub Integration: Automated PR Review Setup

Why Automate PR Reviews on GitHub with Qodo Merge

Pull request review is the biggest recurring time cost in most engineering teams. Every PR that sits waiting for a human reviewer represents blocked work, lost context, and accumulated merge conflicts. The problem scales with team size – a 20-developer team generating 15 PRs per day can easily accumulate hours of review backlog each week, especially when senior engineers are the bottleneck.

Qodo Merge addresses this by providing AI-powered PR review that triggers automatically when a pull request is opened on GitHub. Built on the open-source PR-Agent engine, Qodo Merge analyzes your diff, generates structured PR descriptions, posts code review comments with severity ratings, suggests improvements, and identifies test coverage gaps – all within minutes of the PR being created. The February 2026 release of Qodo 2.0 introduced a multi-agent architecture where specialized agents handle bug detection, security analysis, code quality, and test coverage simultaneously, achieving the highest F1 score (60.1%) among eight AI code review tools in benchmark testing.

This guide covers everything you need to set up Qodo Merge on GitHub – from installing the managed GitHub App to self-hosting PR-Agent via GitHub Actions, configuring review behavior with .pr_agent.toml, and using slash commands to interact with the AI reviewer directly in your pull requests. Whether you want the managed experience or full self-hosted control, you will have automated PR review running on your GitHub repositories by the end of this guide.

For a broader look at the Qodo platform including test generation and IDE features, see our Qodo review. For pricing details across all tiers, check our Qodo Merge pricing breakdown.

Qodo screenshot

Option 1: Install the Qodo Merge GitHub App

The fastest way to get Qodo Merge running on GitHub is through the managed GitHub App. This requires no CI/CD configuration, no Docker containers, and no API key management. Qodo handles the infrastructure, and you get reviews on your PRs within minutes of installation.

Step 1: Sign up and authorize

  1. Navigate to qodo.ai and click Sign Up or Get Started
  2. Choose GitHub as your sign-in method
  3. GitHub displays an OAuth authorization page asking you to grant Qodo permission to read your profile information
  4. Click Authorize to proceed

After authorization, you land on the Qodo dashboard where you can manage repositories, view review activity, and configure organization-level settings.

Step 2: Install the GitHub App

The GitHub App installation is separate from the OAuth sign-in. The OAuth grants Qodo access to your identity, while the App installation grants it access to your repositories and pull request events.

  1. From the Qodo dashboard, click Add Repositories or Install GitHub App
  2. GitHub displays the App installation page with the required permissions:
    • Read access to repository contents, metadata, and pull requests
    • Write access to pull request comments, issues, and checks
    • Webhook subscriptions for pull request creation and update events
  3. Choose your installation scope:
    • All repositories – Qodo Merge reviews PRs on every repository in the organization, including repositories created in the future
    • Only select repositories – choose specific repositories from a list and add more later as needed
  4. Click Install to complete the process

For GitHub organizations, the installation may require organization owner approval. If you see a Request button instead of Install, the request is forwarded to your organization owner. Ask them to approve the Qodo app under Settings, then Third-party access.

Step 3: Verify the installation

Once the GitHub App is installed, verify it works by opening a pull request on one of the enabled repositories.

  1. Create a branch and make a code change
  2. Push the branch and open a pull request targeting your default branch
  3. Wait 2 to 4 minutes for Qodo Merge to analyze the diff

Qodo Merge posts several items on the PR:

  • A PR description summarizing the changes, categorized by type (bug fix, feature, refactor, etc.)
  • A code review with structured findings organized by severity – bugs, security issues, code quality, and best practices
  • Code improvement suggestions with specific, actionable recommendations for each finding

If nothing appears after 5 minutes, check the troubleshooting section at the end of this guide.
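If you prefer to drive the whole check from the terminal, the three steps above can be scripted with git and the GitHub CLI. This is only a sketch: the branch name, file, and commit message are placeholders, and it assumes gh is installed and authenticated for the repository:

# Create a branch with a trivial change (placeholder branch and file names)
git checkout -b qodo-merge-test
echo "Qodo Merge test" >> README.md
git commit -am "Test Qodo Merge review"
git push -u origin qodo-merge-test

# Open a PR against the default branch, then watch it for review comments
gh pr create --fill
gh pr view --web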

Option 2: Self-Host PR-Agent via GitHub Actions

For teams that want full control over the review pipeline – or need to keep source code within their own infrastructure – self-hosting PR-Agent via GitHub Actions is the best approach. PR-Agent is the open-source engine behind Qodo Merge, available under the Apache 2.0 license on GitHub. You bring your own LLM API keys, and the review runs entirely within your GitHub Actions environment.

This is the most popular deployment method among teams using the open-source version of Qodo Merge, and it is the approach our Qodo Merge review recommends for budget-conscious teams.

Prerequisites

Before setting up the GitHub Action, you need:

  • A GitHub repository with Actions enabled
  • An OpenAI API key (or an API key for another supported LLM provider like Anthropic, Azure OpenAI, or Hugging Face)
  • Repository admin access to add secrets and workflow files

Create the workflow file

Create a new file at .github/workflows/pr-agent.yml in your repository:

name: PR-Agent

on:
  pull_request:
    types: [opened, reopened, ready_for_review, review_requested]
  issue_comment:
    types: [created]

jobs:
  pr_agent:
    if: ${{ github.event_name == 'pull_request' || (github.event_name == 'issue_comment' && startsWith(github.event.comment.body, '/')) }}
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
      contents: read
    name: Run PR-Agent
    steps:
      - name: PR-Agent
        id: pr_agent
        uses: Codium-ai/pr-agent@main
        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          github_action_config.auto_review: "true"
          github_action_config.auto_describe: "true"
          github_action_config.auto_improve: "true"

Add your API key as a repository secret

  1. Go to your repository on GitHub
  2. Navigate to Settings, then Secrets and variables, then Actions
  3. Click New repository secret
  4. Set the name to OPENAI_KEY and paste your OpenAI API key as the value
  5. Click Add secret

The GITHUB_TOKEN secret is automatically provided by GitHub Actions and does not need to be configured manually. It grants the workflow permission to post comments on pull requests.
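If you prefer the command line, the GitHub CLI can create the same secret without clicking through the UI. A sketch, assuming gh is installed and authenticated against the repository:

# Prompts for the secret value interactively
gh secret set OPENAI_KEY

# Or read the value from a file instead of the prompt
gh secret set OPENAI_KEY < openai_key.txt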

Understanding the workflow configuration

The workflow triggers on two event types:

  • pull_request events (opened, reopened, ready_for_review, review_requested) – these trigger the automatic review, description, and improvement commands
  • issue_comment events – these enable slash commands like /review, /describe, and /improve that you type as PR comments

The three github_action_config environment variables control which commands run automatically:

Variable What it does
auto_review Automatically posts a structured code review when a PR is opened
auto_describe Automatically generates a PR description summarizing the changes
auto_improve Automatically posts code improvement suggestions

Set any of these to "false" if you want to trigger them manually via slash commands instead.
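For example, to keep automatic reviews and descriptions but run improvement suggestions only on demand via /improve, the env block of the workflow above becomes:

env:
  OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  github_action_config.auto_review: "true"
  github_action_config.auto_describe: "true"
  github_action_config.auto_improve: "false"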

Test the setup

Commit the workflow file to your default branch, then open a new pull request. Within 2 to 4 minutes (depending on the diff size and LLM response time), PR-Agent should post its review comments on the PR. Check the Actions tab in your repository to see the workflow run and troubleshoot any errors.

Using Slash Commands for Interactive Review

One of the most powerful features of the Qodo Merge GitHub integration is the ability to interact with the AI reviewer using slash commands. These work with both the managed GitHub App and the self-hosted PR-Agent setup (as long as the issue_comment trigger is configured in your workflow).

Type any slash command as a comment on an open pull request, and Qodo Merge processes the request and responds with structured output.
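You do not have to leave the terminal to do this. The GitHub CLI can post the comment for you; in this sketch, 123 is a placeholder PR number:

gh pr comment 123 --body "/review"
gh pr comment 123 --body "/ask Does this change affect the authentication flow?"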

/review – Full code review

The /review command triggers a comprehensive code review of the PR diff. The reviewer analyzes the changes for bugs, security vulnerabilities, code quality issues, and best practice violations.

/review

The output is a structured review with findings categorized by severity and type. Each finding includes a description, the relevant code snippet, and a suggested fix where applicable. You can also use /review --extended for a more thorough analysis that examines additional dimensions like error handling completeness and edge case coverage.

/describe – Generate PR description

The /describe command generates or updates the PR description based on the diff content.

/describe

Qodo Merge produces a structured description including a summary of what the changes do, the type of change (bug fix, feature, refactor, documentation), a file-by-file walkthrough, and optional labels. This is particularly useful for PRs opened by developers who leave the description blank or write minimal notes.

/improve – Code improvement suggestions

The /improve command identifies specific code improvements and presents them as actionable suggestions.

/improve

Each suggestion includes the file path, line number, the current code, a recommended change, and an explanation of why the change improves the code. Suggestions typically cover performance optimizations, readability improvements, and pattern adherence.

/ask – Ask questions about the PR

The /ask command lets you ask natural language questions about the pull request.

/ask What is the impact of this change on the existing authentication flow?

Qodo Merge analyzes the diff in context and provides a detailed answer. This is useful for understanding complex changes, especially when reviewing someone else’s PR.

/update_changelog – Generate changelog entries

/update_changelog

This command generates changelog entries based on the PR changes. It categorizes changes into sections like Added, Changed, Fixed, and Removed, following the Keep a Changelog format.

/add_docs – Generate documentation

/add_docs

Generates docstrings and inline documentation for functions and classes modified in the PR. The documentation follows the conventions of the target language – JSDoc for JavaScript, docstrings for Python, Javadoc for Java, and so on.

/test – Generate test suggestions

/test

Identifies untested code paths in the PR and suggests test cases with descriptions and expected behavior. This command does not generate complete test files (that is handled by Qodo Gen in the IDE), but it outlines what tests should be written to cover the new or modified code.

Configuring Review Behavior with .pr_agent.toml

The .pr_agent.toml file is the configuration center for Qodo Merge and PR-Agent. Place it in the root of your repository to customize how the AI reviewer behaves for that specific repository. This configuration works with both the managed GitHub App and self-hosted PR-Agent.

Basic configuration

Here is a starter configuration that covers the most commonly customized settings:

[pr_reviewer]
num_code_suggestions = 4
inline_code_comments = true
require_estimate_effort_to_review = true
extra_instructions = """
Focus on security vulnerabilities, logic errors, and missing error handling.
Do not comment on code style or formatting - our linter handles that.
"""

[pr_description]
publish_labels = true
use_bullet_points = true
add_original_user_description = true
include_generated_by_header = true

[pr_code_suggestions]
num_code_suggestions = 4
extra_instructions = """
Prioritize suggestions that fix potential bugs or improve error handling.
"""

[github_action_config]
auto_review = true
auto_describe = true
auto_improve = true

Excluding files from review

Use the ignore configuration to prevent Qodo Merge from reviewing files that do not benefit from AI analysis:

[ignore]
glob = [
  "*.lock",
  "*.generated.*",
  "dist/**",
  "node_modules/**",
  "*.min.js",
  "coverage/**",
  "__snapshots__/**",
  "*.svg",
  "*.png",
  "*.jpg"
]

This prevents the reviewer from wasting tokens and time on lock files, build output, generated code, minified files, and binary assets.

Custom labels

Define custom labels that Qodo Merge applies automatically based on the content of the PR:

[pr_description]
publish_labels = true

[pr_description.custom_labels.security]
description = "PR includes changes to authentication, authorization, or security-sensitive code"

[pr_description.custom_labels.database]
description = "PR includes database migrations, schema changes, or query modifications"

[pr_description.custom_labels.api]
description = "PR modifies public API endpoints or request/response schemas"

[pr_description.custom_labels.breaking-change]
description = "PR introduces a breaking change to an existing API or interface"

Adjusting review depth

Control how thorough the review is by adjusting the reviewer configuration:

[pr_reviewer]
# Number of code suggestions to generate (1-10)
num_code_suggestions = 6

# Include effort estimation for the review
require_estimate_effort_to_review = true

# Include security analysis in every review
require_security_review = true

# Ask the reviewer to identify testing gaps
require_tests = true

# Maximum number of files to review per PR (0 = unlimited)
max_files = 0

Higher values for num_code_suggestions produce more improvement recommendations but increase LLM token consumption and response time. For self-hosted PR-Agent, this directly affects your API costs. Start with 4 suggestions and increase if your team finds the output too brief.

Advanced GitHub Actions Configuration

Beyond the basic workflow, you can customize the PR-Agent GitHub Action for more complex scenarios.

Using a different LLM provider

PR-Agent supports multiple LLM providers. To use Anthropic’s Claude instead of OpenAI:

env:
  ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  config.model: "anthropic/claude-sonnet-4-20250514"
  github_action_config.auto_review: "true"
  github_action_config.auto_describe: "true"
  github_action_config.auto_improve: "true"

Add your Anthropic API key as a repository secret, and set the config.model to the desired Claude model. PR-Agent also supports Azure OpenAI, Hugging Face, and other providers through LiteLLM.

Running only on non-draft PRs

If you want to skip draft pull requests to save API costs:

on:
  pull_request:
    types: [opened, reopened, ready_for_review]

jobs:
  pr_agent:
    if: ${{ !github.event.pull_request.draft }}
    runs-on: ubuntu-latest

The ready_for_review trigger ensures that PR-Agent runs when a draft PR is marked as ready, so reviews are still triggered at the right time.

Restricting slash commands to specific users

By default, anyone who can comment on a PR can trigger slash commands. To restrict this to specific users or teams:

jobs:
  pr_agent:
    if: >
      github.event_name == 'pull_request' ||
      (github.event_name == 'issue_comment' &&
       startsWith(github.event.comment.body, '/') &&
       contains(fromJson('["username1","username2"]'), github.event.comment.user.login))
    runs-on: ubuntu-latest

Replace username1 and username2 with the GitHub usernames of users who should have permission to trigger AI review commands.

Running PR-Agent with Docker directly

For teams that want more control over the PR-Agent version and configuration, you can run the Docker image directly instead of using the GitHub Action:

steps:
  - name: Run PR-Agent
    run: |
      docker run --rm \
        -e OPENAI_KEY=${{ secrets.OPENAI_KEY }} \
        -e GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }} \
        -e github_action_config.auto_review=true \
        -e github_action_config.auto_describe=true \
        -e github_action_config.auto_improve=true \
        codiumai/pr-agent:latest

This approach lets you pin to a specific PR-Agent version by replacing latest with a version tag.

Customizing Review Behavior for Your Team

Getting Qodo Merge installed is the first step. Making it useful for your specific codebase requires tuning the review behavior based on your team’s conventions, tech stack, and pain points.

Writing effective extra_instructions

The extra_instructions field in .pr_agent.toml accepts natural language that guides the AI reviewer. Here are examples for common scenarios:

For a Node.js backend service:

[pr_reviewer]
extra_instructions = """
This is a Node.js Express backend using PostgreSQL.
Focus on:
- SQL injection prevention (all queries must use parameterized statements)
- Authentication middleware on all protected routes
- Proper error handling with try/catch and error logging
- Input validation using Joi or Zod schemas
Do not comment on import ordering or semicolon usage - our ESLint config handles those.
"""

For a React frontend:

[pr_reviewer]
extra_instructions = """
This is a React 19 application using TypeScript and Tailwind CSS.
Focus on:
- Proper hook usage (dependency arrays, cleanup functions)
- Component re-render prevention (memoization where appropriate)
- Accessibility (ARIA attributes, keyboard navigation, alt text)
- Type safety (no 'any' types without justification)
Do not comment on CSS class ordering or component file structure.
"""

Combining Qodo Merge with existing tools

Most teams already have linters, formatters, and CI checks running on their PRs. Qodo Merge complements these tools by providing semantic analysis that rule-based tools cannot perform. A practical setup looks like this:

  • Linter (ESLint, Ruff, etc.) – handles formatting, style, and simple patterns
  • Static analysis (SonarQube, Semgrep) – handles deterministic bug detection and security scanning
  • Qodo Merge – handles AI-powered semantic review for logic errors, design issues, and contextual problems

Configure Qodo Merge to skip the categories that your other tools already cover:

[pr_reviewer]
extra_instructions = """
Do not comment on code formatting, import ordering, or naming conventions.
Our ESLint and Prettier configurations handle those concerns.
Focus exclusively on logic errors, security vulnerabilities, missing error
handling, and performance issues.
"""

This reduces comment noise and ensures that each tool in your pipeline contributes unique value rather than duplicating feedback.

Considering Alternatives

While Qodo Merge is a strong option for GitHub PR review, it is worth evaluating alternatives before committing to a tool. For a full list, see our guide to the best AI PR review tools.

CodeAnt AI is a Y Combinator-backed platform that bundles AI PR review with SAST security scanning, secret detection, infrastructure-as-code security, and DORA metrics in one tool. Pricing starts at $24/user/month for the Basic plan (PR review with line-by-line feedback and one-click auto-fixes) and goes to $40/user/month for the Premium plan that adds security scanning and engineering dashboards. For teams that want PR review plus security analysis without stitching together multiple tools, CodeAnt AI offers strong value at a competitive price point.

CodeRabbit at $24/user/month provides faster review times (approximately 90 seconds versus Qodo’s 2 to 4 minutes), a lower false positive rate, and 40+ built-in linters. Its free tier is more generous than Qodo’s, with unlimited repositories and hourly rate limits instead of a monthly cap.

The open-source PR-Agent route is the most cost-effective if your team has the DevOps capacity to manage the deployment. A 20-developer team self-hosting PR-Agent typically spends $20 to $80 per month on LLM API costs versus $600 per month for the Qodo Teams subscription. See our Qodo Merge pricing guide for a detailed cost comparison.

For teams evaluating broader alternatives to the Qodo platform, our Qodo alternatives guide covers the full landscape of competing tools.

Troubleshooting Qodo Merge on GitHub

Qodo Merge is not posting reviews

Check the GitHub App installation. Navigate to your GitHub organization settings (or personal account settings), then Applications. Verify that Qodo Merge (or PR-Agent) is listed and has access to the target repository.

Check the webhook deliveries. In your repository settings, go to Webhooks and examine the recent deliveries. Failed deliveries with 4xx or 5xx response codes indicate a connectivity issue between GitHub and Qodo’s servers. For self-hosted PR-Agent, check the GitHub Actions workflow logs instead.

Check if the PR is a draft. Qodo Merge skips draft pull requests by default. Mark the PR as ready for review or configure draft PR handling in your .pr_agent.toml:

[github_action_config]
handle_draft_pr = true

Check the free tier limits. The Qodo free Developer plan caps at 30 PR reviews per month for the entire organization. If your team has exceeded this limit, reviews stop until the next billing cycle. Check the Qodo dashboard for your current usage.

Check the Actions workflow (self-hosted only). Go to the Actions tab in your repository and look for failed PR-Agent runs. Common failures include expired or invalid API keys, rate limiting from the LLM provider, and permission errors on the GITHUB_TOKEN.
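The GitHub CLI can also surface failed runs without opening the web UI. A sketch, assuming the workflow file is named pr-agent.yml as in this guide:

# List recent runs of the PR-Agent workflow
gh run list --workflow=pr-agent.yml --limit 5

# Show only the failed step logs for a specific run ID from the list above
gh run view <run-id> --log-failed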

Slash commands are not working

Verify the issue_comment trigger. Your GitHub Actions workflow must include issue_comment as a trigger event for slash commands to work. If you only have pull_request triggers, slash commands will not be processed.

Check the conditional logic. The workflow job needs an if condition that allows issue_comment events where the comment body starts with /. Review the workflow example in this guide and confirm your condition matches.

Verify permissions. The workflow needs issues: write and pull-requests: write permissions to post response comments. Check the permissions block in your workflow file.

Reviews are too noisy

Add file exclusions. Configure the [ignore] section in .pr_agent.toml to skip files that generate low-value comments – lock files, generated code, minified assets, and test snapshots.

Reduce the number of suggestions. Lower the num_code_suggestions value in the [pr_reviewer] and [pr_code_suggestions] sections. Start with 3 to 4 and increase only if the team wants more.

Add targeted extra_instructions. Tell the reviewer what not to comment on. If your linter handles formatting and your team has agreed on naming conventions, exclude those categories explicitly.

Disable automatic commands. If auto_review, auto_describe, and auto_improve are all enabled, each PR gets three separate comment threads. Disable the ones your team does not find useful:

[github_action_config]
auto_review = true
auto_describe = true
auto_improve = false

Configuration file is not being read

Verify the file name. The file must be named exactly .pr_agent.toml in the repository root. Variations like pr_agent.toml (without the leading dot) or .pr_agent.yaml (wrong format) are not recognized.

Verify the file is on the default branch. PR-Agent reads the configuration from the default branch, not from the PR branch. Merge your .pr_agent.toml to main (or your default branch) before testing.

Validate the TOML syntax. Use a TOML validator to check for syntax errors. Common mistakes include missing quotes around string values, incorrect table headers, and improperly escaped special characters.
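A quick local check before pushing, assuming Python 3.11 or newer is available (tomllib raises a TOMLDecodeError that points at the offending line if the syntax is broken):

python3 -c "import tomllib; tomllib.load(open('.pr_agent.toml', 'rb'))"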

Summary

Setting up Qodo Merge on GitHub gives your team automated PR reviews that catch bugs, security vulnerabilities, and code quality issues before human reviewers spend time on the pull request. The managed GitHub App is the fastest path – install it, select your repositories, and reviews start appearing within minutes. The self-hosted PR-Agent approach via GitHub Actions offers the same core review capabilities at a fraction of the cost, with full control over your data and infrastructure.

The slash commands – /review, /describe, /improve, /ask, and others – turn the AI reviewer into an interactive tool rather than a passive commenter. Combined with the .pr_agent.toml configuration file, you can tailor the review behavior to match your team’s coding standards, tech stack, and quality priorities.

Whether you choose the managed app or the self-hosted route, the goal is the same: every pull request gets immediate, structured feedback so that human reviewers can focus on architecture, design decisions, and the contextual questions that genuinely require human judgment.

Further Reading

  • How to Set Up Qodo AI in JetBrains (IntelliJ, PyCharm, WebStorm)
  • How to Set Up Qodo AI in VS Code: Installation Guide
  • Best AI Code Review Tools in 2026 – Expert Picks
  • Best AI Test Generation Tools in 2026: Complete Guide
  • CodiumAI Alternatives: Best AI Tools for Automated Testing in 2026

Frequently Asked Questions

How do I install Qodo Merge on GitHub?

There are two ways to install Qodo Merge on GitHub. The first is the managed Qodo Merge GitHub App, which you install from the Qodo website by signing in with your GitHub account, authorizing the app, and selecting your repositories. The second is self-hosting PR-Agent via GitHub Actions by creating a workflow file in your repository, adding your LLM API key as a GitHub secret, and configuring the action to trigger on pull request events. The GitHub App is the faster option and requires no CI configuration, while the GitHub Actions approach gives you full control over the review pipeline and avoids sending code to Qodo’s servers.

What slash commands does Qodo Merge support on GitHub?

Qodo Merge supports several slash commands that you type as PR comments. The main commands are /review to trigger a full code review, /describe to generate or update the PR description, /improve to get code improvement suggestions, /ask followed by a question to ask about the PR, /update_changelog to generate changelog entries, /add_docs to generate documentation for changed code, and /test to generate test suggestions for modified functions. Each command triggers a specific AI agent that analyzes the PR diff and posts structured feedback directly on the pull request.

Is Qodo Merge free for GitHub repositories?

Yes, in two ways. The Qodo free Developer plan includes 30 PR reviews per month at no cost, and it works with GitHub repositories through the managed GitHub App. Alternatively, you can self-host PR-Agent – the open-source engine behind Qodo Merge – for free using GitHub Actions with your own LLM API keys. The self-hosted option has no review limits and no subscription cost. The only expense is LLM API usage, which typically costs a few cents per review depending on the diff size.

How do I set up PR-Agent as a GitHub Action?

Create a workflow file at .github/workflows/pr-agent.yml in your repository. Set the trigger to pull_request events including opened, reopened, ready_for_review, and review_requested types. Use the Codium-ai/pr-agent Docker image as the action, pass your OpenAI API key as a secret named OPENAI_KEY, and specify the PR-Agent commands you want to run – typically github_action with auto_review, auto_describe, and auto_improve enabled in the configuration. Commit the workflow file to your default branch and open a pull request to test it.

What is the difference between the Qodo Merge GitHub App and self-hosted PR-Agent?

The Qodo Merge GitHub App is a managed service that handles all infrastructure, provides the multi-agent review architecture from Qodo 2.0, includes the context engine for multi-repo intelligence, and offers SOC 2 compliance. Self-hosted PR-Agent is the open-source version that runs in your own GitHub Actions pipeline with your own LLM API keys. It includes core review features like describe, review, and improve commands but does not include the multi-agent architecture, context engine, analytics dashboard, or managed hosting. The GitHub App is easier to set up but costs $30/user/month on the Teams plan, while self-hosted PR-Agent is free aside from LLM API costs.

How do I configure Qodo Merge review behavior on GitHub?

Create a .pr_agent.toml file in your repository root. This TOML configuration file controls review behavior including which commands run automatically on PR events, the number of code suggestions, whether to include effort estimation, which files to exclude from review, and custom prompts for the AI reviewer. Configuration options include pr_reviewer settings for review depth and focus, pr_description settings for auto-generated descriptions, and pr_code_suggestions settings for improvement recommendations. Changes to the configuration file take effect on the next pull request without any restart.

Can I run Qodo Merge only on specific GitHub repositories?

Yes. When installing the Qodo Merge GitHub App, you can choose between granting access to all repositories in your organization or selecting specific repositories from a list. You can change this selection at any time in your GitHub organization settings under Applications. For the self-hosted PR-Agent approach via GitHub Actions, you control exactly which repositories run the review by adding or removing the workflow file from each repository. This gives you granular control over which repos get automated PR review.

Why is Qodo Merge not reviewing my GitHub pull requests?

Check these common causes in order. First, verify the GitHub App is installed on the repository by checking your GitHub organization settings under Applications. Second, check if the PR is a draft – Qodo Merge may skip draft PRs by default. Third, verify you have not exceeded the free tier limit of 30 PR reviews per month. Fourth, check your .pr_agent.toml configuration for path exclusions that might be filtering out the changed files. Fifth, check the GitHub webhook deliveries in your repository settings to ensure webhooks are being delivered successfully. For self-hosted PR-Agent, also verify that your LLM API key is valid and your GitHub Actions workflow is running without errors.

Does Qodo Merge work with GitHub Enterprise?

Yes. Qodo Merge supports GitHub Enterprise through both the managed GitHub App and self-hosted PR-Agent. For GitHub Enterprise Cloud, the standard GitHub App installation process works. For GitHub Enterprise Server (self-hosted GitHub), self-hosting PR-Agent via GitHub Actions is the recommended approach since the Qodo managed app may not be able to reach your internal GitHub instance. The Enterprise plan also supports on-premises and air-gapped deployments for organizations with strict network isolation requirements.

How do I customize the PR description generated by Qodo Merge?

Configure the pr_description section in your .pr_agent.toml file. You can set publish_labels to control whether labels are added, use_bullet_points to switch between paragraph and bullet-point format, add_original_user_description to preserve the original PR description alongside the AI-generated one, and include_generated_by_header to show or hide the attribution line. You can also add custom_labels with name, description pairs to define your own labeling taxonomy that Qodo Merge applies automatically based on the PR content.

What alternatives to Qodo Merge are available for GitHub PR review?

The main alternatives for GitHub PR review are CodeRabbit at $24/user/month with 40+ built-in linters and a generous free tier, CodeAnt AI starting at $24/user/month with bundled SAST and DORA metrics, GitHub Copilot code review at $19/user/month built natively into GitHub, and Sourcery with Python-focused review. CodeRabbit is the most popular alternative with faster review times (90 seconds vs 2-4 minutes) and lower false positive rates. CodeAnt AI bundles the most security features at its price point. For a comprehensive comparison, see our guide to the best AI PR review tools.

How do I use the /review command in Qodo Merge?

Type /review as a comment on any open pull request in a repository where Qodo Merge is installed. The AI agent analyzes the full diff, checks for bugs, security vulnerabilities, code quality issues, and best practice violations, then posts a structured review with categorized findings and severity ratings. You can add flags to customize the review – for example, /review --extended to get a more thorough analysis. The review typically completes within 2 to 4 minutes. You can trigger /review multiple times on the same PR, and each run analyzes the latest state of the diff.

Can I use Qodo Merge alongside other GitHub review tools?

Yes. Qodo Merge runs independently and does not conflict with other review tools like CodeRabbit, SonarQube, Semgrep, or GitHub’s built-in code review. Each tool posts its own comments on the pull request. Many teams layer Qodo Merge for AI-powered semantic review alongside a static analysis tool like SonarQube for deterministic rule enforcement. The only consideration is that multiple AI review tools posting on the same PR can create comment noise, so most teams choose one AI reviewer and complement it with rule-based analysis tools.

Originally published at aicodereview.cc