KVerify: A Two-Year Journey to Get Validation Right

In December 2023, I wrote a small article about a utility I called ValidationBuilder. The idea was simple: a DSL where you’d call validation rules as extension functions on property references, collect all violations in one pass, and get a result back. Ktor-specific, but the concept was portable.

I published it and moved on.

Except I didn’t.

The Problem

I came to Kotlin without a Java background. Spring was my first serious framework, and I didn’t really understand it.

My issue was specific: I was reaching for Kotlin object declarations where Spring wanted managed beans. I was hardcoding configuration values that should have been injected, not because I didn’t know configuration existed, but because I couldn’t figure out how to bridge Spring’s dependency injection with Kotlin’s object declarations. I was working around my own knowledge gaps without realizing that’s what I was doing. Everything ran, so I assumed everything was fine.

Eventually I moved to Ktor. Deliberately — I wanted less magic, more control. What I got instead was a different kind of overwhelming. Ktor gives you primitives and expects you to build the structure yourself. No built-in validation, no standard error handling, no guardrails. You figure it out or you don’t.

That turned out to be one of the better things that happened to me as a developer. Without a framework making decisions for me, I had to actually learn why certain patterns exist. Architecture, separation of concerns, what makes code maintainable — I learned it the hard way, by building things from scratch and getting them wrong.

But the validation problem stayed unsolved. Hand-written if checks everywhere. Error messages as hardcoded strings. No reuse, no structure. I looked at Konform and Valiktor. Valiktor was already abandoned. Konform was actively maintained and had a reasonable DSL — but I looked at it, decided it would take too long to learn properly, and moved on. Which, in hindsight, is a funny reason to spend the next two years building my own library from scratch.

I didn’t have precise words for what was wrong. I just knew the answer wasn’t there.

So in December 2023, I wrote my own.

The Library

Seven months later, in July 2024, I started a proper repository. Not because I had a clear vision of what the library should be — I just wanted to stop copying the same validation code between projects. A few lines of dependency instead of manually replicating the ValidationBuilder every time.

That was the entire plan.

NamedValue

The library started with a concept called NamedValue — a simple wrapper: a value and a name, travelling together.

data class NamedValue<T>(val name: String, val value: T)

The reason was practical. Validation error messages need to reference the field that failed — “password must not be blank”, “age must be at least 18”. Without NamedValue, you’d have to pass the field name to every rule call manually. With it, the name was always there, carried alongside the value, available to any rule that needed it. Rules were just extension functions on NamedValue<T>:

password.named("password").notBlank().ofLength(8..255)

It felt clean. The infix named read naturally, the rules chained, and the name showed up in every error message automatically.
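
A minimal sketch of what a rule looked like under this model (simplified; the real rules reported violations to a context instead of throwing):

// Illustrative only: the carried name feeds the error message automatically.
fun NamedValue<String>.notBlank(): NamedValue<String> {
    require(value.isNotBlank()) { "$name must not be blank" }
    return this
}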

The problem was that it had quietly made a decision I hadn’t noticed yet — it tied the field name to the value itself, rather than to the validation context. That distinction wouldn’t matter for a while. And then it would matter enormously.

The Iterations

The 1.x releases came quickly. November, December 2024 — versions shipping, rules working, NamedValue doing its job. On paper, the library was functional.

But something kept feeling off. The ValidationContext had quietly accumulated weight:

fun validate(message: String, predicate: Predicate)
fun validate(rule: Rule<Unit>)
fun <T> T.validate(vararg rules: Rule<T>): T
fun <T> NamedValue<T>.validate(message: MessageCallback<T>, predicate: ValuePredicate<T>): NamedValue<T>
fun <T> NamedValue<T>.validate(ruleCallbackWithMessage: NamedValueRuleCallback<T>): NamedValue<T>

Five ways to validate. Type aliases for every possible callback shape. Each one added to solve a real problem — and together they made the interface feel like it was trying to be everything at once.

The fix was to stop and ask: what does a validation context actually need to do? The answer was one thing: decide what happens when a rule fails.

interface ValidationContext {
    fun onFailure(violation: Violation)
}

That’s it. Everything else was the rule’s problem, not the context’s.
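
The two obvious implementations then become trivial. A sketch, assuming a ValidationException wrapper (both class names are illustrative):

class CollectingContext : ValidationContext {
    // Violations pile up here; the caller inspects the list afterwards.
    val violations = mutableListOf<Violation>()

    override fun onFailure(violation: Violation) {
        violations += violation
    }
}

class ThrowingContext : ValidationContext {
    // Abort on the first failure instead of collecting.
    override fun onFailure(violation: Violation) {
        throw ValidationException(violation)
    }
}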

This pattern repeated itself throughout 2025. Build something, feel the weight accumulate, find the one thing it actually needed to do, cut everything else. The library kept getting smaller. Each simplification felt right for a week, then revealed the next thing that was still wrong.

And through all of it, NamedValue stayed. Untouched. Seemingly fine.

5 AM

This was the state of things through most of 2025. Iterating. Releasing. Something always slightly wrong.

There were nights where I’d wake up at five — suddenly, with something already forming in my head. Not a diagram or a grand insight. Just code. An API shape. A specific thing that had been bothering me that now had an answer.

On July 14, 2025, that happened at 5:29 AM. I made two commits — a package restructuring I had been putting off for days — and started my day four hours early. I napped later and went to bed earlier that night.

The sleep during those periods wasn’t great. Not quite fever dreams, but something adjacent — restless, with the problem still running somewhere in the background. I wasn’t suffering dramatically. I was just genuinely stuck on something, and my brain apparently didn’t get the memo that work was supposed to stop.

Spring, Revisited

In January 2026, I decided to learn Spring again. Not to go back to it — just to understand it better. And since I had a library now, I figured I’d try using it alongside Spring’s own validation.

The code I wrote looked like this:

fun validate(context: ValidationContext) =
    with(context) {
        ::title.toNamed().verifyWith(
            provider.namedNotBlank(),
            provider.namedLengthBetween(3..255) { violation("Title must be between 3 and 255 characters long") }
        )
    }

Spring’s version, sitting right above it in the same file:

@NotBlank
@Length(min = 3, max = 255)
val title: String

I wasn’t trying to beat Spring at its own game — annotations are a different tool for a different philosophy. But looking at both side by side, something became impossible to ignore. The toNamed() call on every property. The provider object. The fact that adding a custom reason meant constructing a violation manually and losing the field name in the process. The constant risk of accidentally applying a regular rule to a named value or vice versa.

I had been living inside this DSL for over a year. I had stopped seeing it. Comparing it to anything — even Spring, which I had originally left — made the friction visible again.

NamedValue wasn’t a small ergonomics issue. It was a load-bearing flaw. And it had been there from the beginning.

The Decision

By January 2026, the problem had a name.

Not NamedValue specifically — the concept behind it. Rules and naming metadata were coupled. A rule for a named value was a different type than a rule for a plain value. That meant two rule sets, two providers, two of everything. Users had to constantly think about which kind of rule to reach for. The maintenance cost was real and growing.

I knew these two things shouldn’t coexist. What I didn’t know was how to separate them.

I had asked AI assistants about this problem before. Multiple times. Nothing useful came back — or maybe it did and I wasn’t ready to hear it yet. But one day in January 2026, I described the problem again, and the answer that came back was something I had been circling for two years without landing on: put the metadata in the context, not in the rule or the value. Rules validate values. Context carries where those values live.

I knew it was right immediately. Not because the AI said so — but because two years of the wrong model had made the right one recognizable the moment I saw it.

NamedValue was removed. Not in a single dramatic commit — it was simply no longer needed. The path would live in the context. Property references would put it there automatically. Rules would stay pure.

But “the context” now meant something different. The old ValidationContext — the interface that decided what to do on failure — was renamed to ValidationScope. A new ValidationContext took its place: a pure, immutable carrier of metadata. Same name, completely different responsibility. The scope executes. The context carries.

What followed was three months of more progress than the previous two years combined.

The New Context

Once NamedValue was gone, a new question appeared immediately: if the path lives in the context, how does the context store it?

The first attempt was a linked list — each context node held one path segment and a reference to its parent. Clean in theory, recursive in practice. It worked but felt fragile for deep nesting.
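
In sketch form, the idea was roughly this (reconstructed for illustration, not the actual commit):

// One path segment per node, with a pointer back to the parent.
class LinkedContext(
    val segment: String,
    val parent: LinkedContext?,
) {
    // Rebuilding the full path walks to the root: recursion at every level of nesting.
    fun path(): String =
        parent?.let { "${it.path()}.$segment" } ?: segment
}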

Two days later, I replaced it with something more ambitious — a design modeled directly after Kotlin’s CoroutineContext. Elements stored by key, retrievable by type, composable with +. It looked elegant:

interface ValidationContext {
    operator fun <E : Element> get(key: Key<E>): E?
    operator fun plus(context: ValidationContext): ValidationContext

    interface Key<E : Element>
    interface Element : ValidationContext {
        val key: Key<*>
    }
}

It lasted twenty-two days.

The flaw was fundamental. CoroutineContext uses key-based replacement — if you add an element with a key that already exists, it replaces the old one. That’s the right behavior for something like a coroutine dispatcher, where you want exactly one. But a validation path needs multiple NamePathElements to coexist. A path like user.friends[0].user requires the same element type to appear multiple times — key replacement silently destroyed them.

The next attempt dropped the key system entirely. Elements became a plain list, retrieved by type filtering. Simpler, but every + operation allocated a new list. Fine for shallow contexts, wasteful for deep ones.

Then fold replaced the list — a binary tree of CombinedContext(left, right) nodes, traversed lazily. No intermediate allocations. Better.
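
A sketch of that shape, with the element type simplified for illustration:

sealed interface Ctx {
    fun <R> fold(initial: R, operation: (R, ElementCtx) -> R): R
    operator fun plus(other: Ctx): Ctx = CombinedCtx(this, other)
}

class ElementCtx(val name: String) : Ctx {
    override fun <R> fold(initial: R, operation: (R, ElementCtx) -> R): R =
        operation(initial, this)
}

class CombinedCtx(val left: Ctx, val right: Ctx) : Ctx {
    // Composition builds a tree; traversal visits leaves without copying any lists.
    override fun <R> fold(initial: R, operation: (R, ElementCtx) -> R): R =
        right.fold(left.fold(initial, operation), operation)
}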

Then, two weeks later, one more simplification: replace fold with Iterable. Not because fold was wrong — but because Iterable is something every Kotlin developer already understands. filter, filterIsInstance, for loops — all of it works immediately, with no new protocol to learn. The context became something you could hand to any standard library function without explanation.

interface ValidationContext : Iterable<ValidationContext.Element>

That was the final form. Six weeks, five implementations, one interface.
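
And because the context is now just an Iterable, reconstructing a path needs nothing beyond the standard library. A sketch, assuming the NamePathElement type mentioned earlier exposes its segment as name:

val path = context
    .filterIsInstance<NamePathElement>()
    .joinToString(".") { it.name }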

The Rules Problem

There was one more problem to solve.

The scope controlled what happened on failure — collect, throw, or anything else. But “anything else” was the interesting part. What if you wanted to stop after the first violation and skip all remaining rules? The scope needed to intercept rule execution to do that.

So rules came back as an explicit abstraction. A rule received the context and the value, ran a check, and returned a violation or null:

interface Rule<in T> {
    fun check(context: ValidationContext, value: T): Violation?
}

The scope would call it, inspect the result, and decide what to do next:

fun <T> enforce(rule: Rule<T>, value: T) {
    val violation = rule.check(validationContext, value) ?: return
    onFailure(violation)
}

This worked — until you tried to extend a scope with additional context. Kotlin’s by delegation generates forwarding methods that call straight into the delegate object, so the delegate’s implementations never see the wrapper’s overrides. When ContextExtendedValidationScope delegated to the original scope, enforce ran with the original scope’s validationContext — not the overridden one. The extended context, the one with the path segment you just added, was completely invisible to the rule.

internal class ContextExtendedValidationScope<out T : ValidationScope>(
    val originalValidationScope: T,
    val additionalContext: ValidationContext,
) : ValidationScope by originalValidationScope {
    override val validationContext: ValidationContext
        get() = originalValidationScope.validationContext + additionalContext
}

The overridden validationContext was there. The enforce implementation just never called it.

I tried to fix the delegation chain. There was no clean way.

The solution was to stop passing context and value to the rule entirely. A rule is just a check — it closes over whatever it needs from the surrounding code. By the time enforce is called, the closure already has the right scope, the right context, the right value:

fun interface Rule {
    fun check(): Violation?
}

scope.enforce {
    if (value.isBlank()) NotBlankViolation(
        validationPath = scope.validationContext.validationPath(),
        reason = "must not be blank",
    ) else null
}

value and scope are already in the closure. The rule doesn’t need parameters. The delegation problem disappeared — because there was nothing left to delegate except the outcome.

Polishing

By late February 2026, the architecture was right. What remained was a different kind of work — not building, but removing.

The guiding principle was simple: leave only the shape. Strip everything down to the minimal core, then test it, then add back only what proved it deserved to exist. Any helper function that wasn’t load-bearing — gone. Any abstraction that existed for convenience rather than necessity — gone.

The commit messages tell the story plainly: “reduce public API surface to minimal stable core”. “replace class-based Rule hierarchy with extension functions”. Rule composition operators — and, or, ! — removed. The Verification interface removed, kept as a concrete class. FirstViolationValidationScope removed entirely.

It wasn’t painful. Overshipping and dealing with it after a release felt far worse than cutting now. A public API is a promise — every method you expose is something you have to maintain, something users will depend on, something you can’t easily take back. The smaller the surface, the more deliberate every addition has to be.

What remained after the cuts was small enough to hold in your head. Clean enough to document properly. Stable enough to release.

On March 23, 2026 — twenty months after the repository’s first commit, and a little over two years after that first ValidationBuilder article — KVerify 2.0.0 shipped.

The Comparison

This is the core of KVerify on its first commit, July 13, 2024:

// Collect all violations
inline fun collectViolations(block: CollectAllContext.() -> Unit): ValidationResult

// Fail on first violation
inline fun failFast(block: FailFastContext.() -> Unit): ValidationResult

// Rules as extension functions
fun NamedString.notBlank(message: String = "$name must not be blank"): NamedString =
    validate(message) { it.isNotBlank() }

// Violations collected into a result
val isValid get() = this is ValidationResult.Valid
inline fun ValidationResult.onInvalid(block: (List<ValidationException>) -> Unit): ValidationResult
inline fun ValidationResult.onValid(block: () -> Unit): ValidationResult

And this is KVerify 2.0.0:

// Collect all violations
inline fun validateCollecting(block: CollectingValidationScope.() -> Unit): ValidationResult

// Fail on first violation
inline fun <T> validateThrowing(block: ThrowingValidationScope.() -> T): T

// Rules as extension functions
fun Verification<String>.notBlank(reason: String? = null): Verification<String> =
    apply {
        scope.enforce {
            if (value.isBlank()) NotBlankViolation(
                validationPath = scope.validationContext.validationPath(),
                reason = reason ?: "Value must not be blank",
            ) else null
        }
    }

// Violations collected into a result
val isValid: Boolean get() = violations.isEmpty()
inline fun ValidationResult.onInvalid(block: (List<Violation>) -> Unit)
inline fun ValidationResult.onValid(block: () -> Unit)

The user-facing API changed completely. The underlying shape did not.

Two strategies — collect or throw. Rules as extension functions. A result you can branch on. onValid, onInvalid. These were all there on day one. Not because the first version was well-designed — it wasn’t. But because the instinct pointing toward them was right.

What two years bought was the understanding of why they were right. The NamedValue coupling, the overengineered contexts, the delegation trap, the CoroutineContext detour — none of it was wasted. Each wrong turn made the correct shape more recognizable. By the time the right architecture appeared, it was obvious. Not because it was simple — because everything else had already been tried.

Experience isn’t knowing the answer. It’s having been wrong enough times to recognize it when you finally see it.

What it looks like today

data class Address(val street: String, val city: String, val postalCode: String)
data class RegisterRequest(val username: String, val email: String, val age: Int, val address: Address)

fun ValidationScope.validate(request: RegisterRequest) {
    verify(request::username).notBlank().minLength(3).maxLength(20)
    verify(request::email).notBlank()
    verify(request::age).atLeast(18)

    pathName("address") {
        verify(request.address::street).notBlank()
        verify(request.address::city).notBlank()
        verify(request.address::postalCode).exactLength(5)
    }
}

val result = validateCollecting { validate(request) }

result.violations
    .filterIsInstance<PathAwareViolation>()
    .forEach { println("${it.validationPath}: ${it.reason}") }

If any of this resonated — the frustration with existing tools, the design problems, or just the library itself — KVerify is on GitHub. A star goes a long way for a solo project. And if you try it and something feels wrong, open an issue. Two years of iteration taught me that the best feedback comes from someone actually using it.

Getting Data from Multiple Sources in Power BI: A Practical Guide to Modern Data Integration

Introduction

According to Microsoft, Power BI is a complete reporting solution that offers data preparation, data visualization, distribution, and management through development tools and an online platform. Power BI can scale from simple reports using a single data source to reports requiring complex data modeling and consistent themes. Use Power BI to create visually stunning, interactive reports to serve as the analytics and decision engine behind group projects, divisions, or entire organizations.
The foundation of every successful Power BI report is reliable data ingestion. Extracting data from various sources is the first crucial step in building an effective report. Connecting to SQL Server works differently from connecting to Excel, so understanding the nuances of each connector matters before the rest of Power BI’s tools can be put to work on effective decision making.
In most real-world business contexts, data is typically spread across multiple sources rather than confined to one. A data analyst may need to integrate data from Excel files, CSVs, SQL Server databases, PDFs, JSON APIs, and SharePoint folders into a unified report. Power BI is well-equipped for this task, offering powerful tools like Get Data and Power Query to efficiently connect, combine, and transform data from various sources. This guide explores how Power BI enables multi-source data integration and provides a step-by-step approach to implementing it effectively.
In this guide, you will learn how to:
• Connect Power BI to multiple data sources efficiently
• Use Power Query to preview and explore your data
• Detect and resolve data quality issues early
• Build a strong foundation for accurate data modeling and reporting

Architecture Overview

At a high level, Power BI follows a layered architecture consisting of:

• Power BI Desktop as the reporting and modeling tool
• Multiple data sources, including:
  • Excel and Text/CSV files
  • SQL Server databases
  • JSON and PDF files
  • SharePoint folders

All data flows into Power BI through Power Query, where it is reviewed and prepared before loading into the data model.
Connecting Data from Multiple Sources

Power BI allows you to connect to a wide range of data sources. Below are step-by-step guides for each major source.

Step 1: Connecting to Excel

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → Excel
  3. Browse and select your Excel file
  4. In the Navigator window, select the required sheets or tables
  5. Click Load (to import directly) or Transform Data (to clean first); the query Power BI generates is shown below
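
Each connection you build this way is recorded as a Power Query (M) query, which you can inspect under Home → Advanced Editor. For the Excel import above, the generated query looks roughly like this (the file path and sheet name are placeholders):

let
    Source = Excel.Workbook(File.Contents("C:\Data\Sales.xlsx"), null, true),
    SalesSheet = Source{[Item = "Sales", Kind = "Sheet"]}[Data],
    PromotedHeaders = Table.PromoteHeaders(SalesSheet, [PromoteAllScalars = true])
in
    PromotedHeaders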

Step 2: Connecting to Text/CSV Files

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → Text/CSV
  3. Browse and select the CSV file (e.g., MultiTimeline.csv)
  4. Preview the dataset in the dialog window
  5. Click Load or Transform Data

Step 3: Connecting to PDF

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → PDF
  3. Select the PDF file
  4. Wait for Power BI to detect available tables
  5. Select the desired table(s)
  6. Click Load or Transform Data

Step 4: Connecting to JSON

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → JSON
  3. Select the JSON file or enter the API endpoint
  4. Load the data into Power Query
  5. Expand nested fields to structure the data properly
  6. Click Close & Apply

Step 5: Connecting to SharePoint Folder

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → SharePoint Folder
  3. Enter the SharePoint site URL
  4. Click OK and authenticate if required
  5. Select files from the folder
  6. Click Combine & Transform Data

Step 6: Connecting to MySQL Database

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → MySQL Database
  3. Enter the server name and database
  4. Provide authentication credentials and click Connect
  5. Select the required tables
  6. Click Load or Transform Data

Step 7: Connecting to SQL Server

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → SQL Server
  3. Enter the server name (e.g., localhost)
  4. Leave the database field blank (or specify one if needed)
  5. Click OK
  6. Select an authentication method (e.g., Windows credentials)
  7. In the Navigator pane, expand the database (e.g., AdventureWorksDW2020)
  8. Select the required tables, such as:
     • DimEmployee
     • DimProduct
     • DimAccount
  9. Click Transform Data to open Power Query Editor

Step 8: Connecting to Web Data

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → Web
  3. Enter the URL of the web page or API
  4. Click OK
  5. Select the data table or structure detected
  6. Click Load or Transform Data

Step 9: Connecting to Azure Analysis Services

  1. Open Power BI Desktop
  2. Navigate to Home → Get Data → Azure → Azure Analysis Services
  3. Enter the server name
  4. Select the database/model
  5. Choose the connection mode (Live connection recommended)
  6. Click Connect

Conclusion

Integrating data from multiple sources in Microsoft Power BI is a foundational skill for modern data analysts. By understanding the architecture and following a structured approach, you can transform fragmented datasets into cohesive, insight-driven reports. Ultimately, great analytics begins with great data, and great data begins with how well you connect, prepare, and understand it before using it to make business decisions.
Mastering tools like Power Query and applying best practices in data modeling will significantly enhance the quality and performance of your analytics solutions.

Dark Dish Lab: A Cursed Recipe Generator

What I Built

Dark Dish Lab is a tiny, delightfully useless web app that generates cursed food or drink recipes.

You pick:

  • Hated ingredients
  • Flavor chaos (salty / sweet / spicy / sour)

Then it generates a short “recipe” with a horror score, a few steps, and a warning.

It solves no real-world problem. It only creates regret.

Demo

  • YouTube demo

Code

  • GitHub repo

How I Built It

  • Frontend: React (Vite)
    • Ingredient + flavor selection UI
    • Calls backend API and renders the generated result
  • Backend: Spring Boot (Java 17)
    • POST /api/generate endpoint
    • Generates a short recipe text and returns JSON (see the request/response sketch after this list)
  • Optional AI: Google Gemini API
    • If AI is enabled and a key is provided, it asks Gemini for a very short recipe format
    • If AI is disabled or fails, it falls back to a non-AI generator
  • Notes
    • Only Unicode emojis are used (no emoji image assets)
    • API keys are kept in local env files and not committed
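
For illustration, the exchange with POST /api/generate might look like the sketch below. These field names are my reconstruction of the shapes described above (ingredients, flavor axes, recipe name, three steps, one warning, horror score), not the project’s exact contract:

POST /api/generate
{ "ingredients": ["pickles", "chocolate"], "flavors": ["salty", "sweet"] }

HTTP/1.1 200 OK
{ "name": "…", "horrorScore": 87, "steps": ["…", "…", "…"], "warning": "…" }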

How I Leveraged Google AI

Gemini (via the Gemini API) is the text generator behind Dark Dish Lab’s “cursed recipe” output.

Instead of using AI as a generic chatbot, I use it as a formatting engine:

  • Input: selected ingredients + flavor axes (salty/sweet/spicy/sour)
  • Output: a short, structured recipe (name, 3 steps, 1 warning)

To keep the experience stable and demo-friendly, I added a few guardrails:

  • Server-side API key only (kept in backend/.env.local, not committed)
  • Strict prompt constraints (no emojis, short length, fixed format)
  • Fallback generator when Gemini is disabled/unavailable
  • Output trimming to avoid unexpectedly long responses

Prize Category

  • Best Use of Google AI — I integrated the Gemini API to generate short, structured cursed recipes from the user’s selected ingredients and flavor axes.
  • Community Favorite angle — Try to generate the most cursed combo you can. Comment your ingredient + flavor picks and the horror score you got. Bonus points if your friends refuse to read it.

Why Developer Productivity Engineering is Underrated

This article was originally published on igorvoloc.com
I write about Developer Productivity Engineering — DX, AI-assisted dev, legacy migration, and business impact from the trenches.

I tracked my own time for two weeks, logging each hour. The numbers were discouraging enough that I stopped. Most of what I called ‘development time’ was actually spent waiting, searching, context-switching, rerunning flaky tests, or digging through outdated documentation and message threads.

This isn’t unique to me. It’s common, and most teams have come to accept it as normal. This invisible waste is what Developer Productivity Engineering (DPE) aims to eliminate.

There’s a deeper reason these problems persist: the people with the power to fund solutions almost never hear about them. Developers spot the slow builds, flaky pipelines, and broken docs immediately — but that pain rarely travels up the chain. Most engineers simply work around it, patching things locally or absorbing the friction as part of the job. By the time leadership hears about the issues, it’s usually reached crisis level.

The Waste Is Real

Developers lose hours each week to friction instead of building a product.[1] 58% report losing five or more hours weekly to inefficiencies like slow builds, broken tooling, and knowledge gaps — a number I’ve observed almost everywhere, and which matches the data.

62% of developers cite technical debt as their top frustration, while over 10% work at companies lacking CI/CD, automated testing, or DevOps.[2]

61% of developers spend at least 30 minutes per day searching for information.[3] Over a third are blocked by knowledge silos, and a similar share re-answer questions they’ve already solved.[3] People call it a documentation problem, but it’s also an architecture and onboarding problem. Knowledge and culture get little attention, do the most damage, and rarely show up on dashboards.

On a team of 50 developers with an average salary of $150K (based on US developer salary benchmarks), 30 minutes lost per person each day adds up to nearly $500K per year. That’s the equivalent of three full-time engineers lost to inefficiency.
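
The back-of-the-envelope arithmetic, assuming an 8-hour day and that lost time scales with salary:

50 developers × $150,000 × (0.5 hours lost / 8-hour day) ≈ $469,000 per year
$469,000 ÷ $150,000 ≈ 3 engineers’ worth of salary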

The waste is the problem DPE aims to solve. Most organizations under-invest in it.

Nobody’s Fixing Developer Productivity — Here’s Why

Your sales team gets a CRM. Nobody asks them to manage deals in a spreadsheet. Finance gets proper accounting software. Nobody questions the ROI on that.

Developers, often among the highest-paid and hardest to replace, must work around friction. They deal with broken CI pipelines, undocumented services, and spend time searching for information that should be readily available.

Here’s the part most managers don’t see: staff and senior engineers on your team are already doing DPE work. They just don’t call it that.

They notice the build is slow and spend a Saturday morning tracking down why. They write the runbook nobody asked for. They push for the migration nobody wants to focus on.

This is DPE work: invisible, uncredited, and taking time away from what teams actually measure. I’ve watched good engineers burn out this way — doing the most important work in the company while getting zero credit for it.

Half of tech managers say their companies measure productivity or DevEx, but only 16% have anyone dedicated to improving them.4

66% of developers say productivity metrics miss their real contributions.[5] Most rely on crude measures: lines of code, story points, commit frequency. Engineers can game these in a month. Everyone knows the numbers aren’t real. And 60% of engineering orgs follow no specific measurement framework,[1] though 90% rate improving productivity as a top initiative (8.2/10 on average).[1]

DORA’s 2025 research, surveying nearly 5,000 professionals, confirms the pattern: only 40% of teams excel at both speed and stability.[6] The rest are held back by process friction and foundational gaps — not trade-offs. The “speed vs. stability” compromise is a myth: the best teams do both.

The core issue is that measuring is easy, but fixing is hard. If you track but never fix, you’re just surveilling. Trust drops, and nothing improves.

What Developer Productivity Engineering Actually Is

DPE is broader than developer experience. DX is one part, but not the whole. DPE treats the developer ecosystem as a system with four interconnected pillars:

  • Developer Experience — feedback loops, cognitive demand, flow state
  • Engineering Infrastructure — build systems, CI/CD, test automation, platform tooling, legacy migration
  • Knowledge and Culture — how information flows, gets captured, and outlasts turnover
  • AI Augmentation — how AI tools sit on top of and multiply whatever the other three produce

If one pillar is weak, the others can’t compensate. Developer experience is where most of that friction shows up first — and the most useful way to think about DX focuses on three dimensions that include most of what matters: feedback loops, cognitive demand, and flow state.[7]

  • Feedback loops are the most obvious. How long from git push to “did this work?” If the answer is 20 minutes, you’ve already lost the thread.
  • Cognitive demand is trickier. It’s not only about whether the code is hard; it’s about how much you need to remember to make a safe change. Every undocumented service boundary and each hidden dependency adds a hidden load. You only notice it when a new hire takes three months to become productive.
  • Flow state is the 2-3-hour window when someone is actually solving a hard problem. How often does the environment protect it?

The biggest influence on DX isn’t tooling — it’s cultural and organizational factors: how product management works, how decisions get made, and how teams communicate.[7] Tooling matters, but the org sets the ceiling. This is why so many “DX improvement” initiatives fail: they buy tools without addressing the environments in which those tools operate.

Legacy systems are a drag on every engineer who touches them. Patterns like strangler fig and incremental migration work, but only if someone gives them priority. In most organizations, that person is a staff engineer making the case to leadership who don’t feel the friction directly.

And then there’s AI, which builds on top of the other three pillars. According to DORA’s survey of nearly 5,000 professionals, over 80% of developers say AI has increased their productivity.[6] About a third of new code in surveyed organizations now comes from AI.[8]

But AI only multiplies the system you already have. If your DX is good, AI tools make it even better. If your DX is broken, AI just makes things break faster.

The DORA 2025 report — the most comprehensive study of AI in software development to date — calls AI “an amplifier.”[6] It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones. At Adidas, teams with loosely coupled architectures and fast feedback loops saw 20–30% productivity gains from AI and a 50% increase in “Happy Time.” Teams with slower feedback loops due to tight coupling saw little to no benefit.[6]

The Fix Exists — And It Compounds

The ROI Is Real

Intervention | Result | Source
DX Core 4 framework | 3-12% increase in engineering efficiency | DX (Core 4) [9]
DX Core 4 framework | 14% increase in R&D time on feature development | DX (Core 4) [9]
Top-quartile DXI | 43% higher employee engagement | DX (DXI) [10]
AI + fast feedback loops (Adidas) | 20-30% productivity gains, 50% more “Happy Time” | DORA 2025 [6]
AI + developer training (Booking.com) | Up to 30% increase in merge requests | DORA 2025 [6]

A single point increase on the Developer Experience Index saves 13 minutes per developer per week. On a 200-person team, a 5-point improvement adds up to 11,000 developer-hours per year. That’s equivalent to five full-time engineers without additional hiring.
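
The arithmetic behind those numbers, assuming a 40-hour week and 52 working weeks:

5 points × 13 minutes = 65 minutes saved per developer per week
65 minutes × 200 developers × 52 weeks ≈ 11,270 hours per year
11,270 hours ÷ (40 × 52) hours per engineer ≈ 5.4 full-time engineers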

DPE interventions compound over time. Better DX means less time lost to tools and faster delivery. It also improves customer satisfaction — and this is where the numbers get serious.

24.5% of developers are happy at work. 47.1% are complacent. 28.4% are actively unhappy.[11] The top satisfaction factor is autonomy and trust.

DPE maps directly onto what keeps engineers engaged — autonomy through better tooling, trust when you eliminate surveillance metrics, and quality because the environment actually supports the work. On a 50-person team, preventing even one or two senior departures per year can pay for a DPE initiative on its own.

Where to Actually Start

You don’t need a large initiative. One person with focus and about 90 days can make a difference. There’s no three-step plan — every organization is different.

The main mistake is treating DPE as a one-off initiative. It’s a practice, like security reviews or performance testing, that needs persistent attention.

But you still need a starting point. Identify the top three pain points by asking, not guessing. A short survey or a few brief conversations across teams is usually enough. The answers often differ from leadership’s assumptions, and the fixes are usually smaller than expected.

Fix one issue and measure the result. Problems are normal: migrations take longer, process changes meet resistance, or metrics don’t move as expected. Use early wins to gain credibility for the next improvement. Bring data.

If you’re a staff or senior engineer who struggles to get leadership’s attention, the data here is for you. You’ve likely been doing DPE work already, even if it’s invisible. Now you have a framework and a name for it.

What’s the one thing on your team that makes you close your laptop and walk away? I’m genuinely curious.

References

  1. Cortex. (2024). State of Developer Productivity Report
  2. Stack Overflow. (2024). Developer Survey 2024 — Most Common Frustrations
  3. Stack Overflow. (2024). Developer Survey 2024 — Time Spent Searching for Information
  4. JetBrains. (2024). Developer Ecosystem Survey 2024
  5. JetBrains. (2025). Developer Ecosystem Survey 2025
  6. DORA. (2025). State of AI-Assisted Software Development
  7. Noda, A., Storey, M.-A., Forsgren, N., & Greiler, M. (2023). DevEx: What Actually Drives Productivity
  8. GitLab. (2026). Global DevSecOps Report
  9. DX. (n.d.). Measuring Developer Productivity with the DX Core 4
  10. DX. (n.d.). The One Number You Need to Increase ROI Per Engineer
  11. Stack Overflow. (2025). Developer Survey 2025

Matrices in Python

To solve computational problems we use different types of data structures, such as arrays and maps. One such data structure is the matrix, or 2D array. Matrices also act as the foundation for other data structures, such as graphs.

Defining matrices in Python

matrix = [
   [1, 2, 3],
   [4, 5, 6],
   [7, 8, 9]
]

Define an all-zero 3×3 matrix, where the row count (R) and the column count (C) both equal 3. The list comprehension creates a fresh row on each iteration, which avoids the shared-row trap of writing [[0] * 3] * 3:

matrix_3x3 = [[0] * 3 for _ in range(3)]

Here are two popular matrix problems:

Transpose a matrix:

def calculate_transpose_of_matrix(mat: list[list]) -> list[list]:
    """Transpose a square matrix in place by swapping entries across the main diagonal."""
    N = len(mat)
    for i in range(N):
        for j in range(i + 1, N):
            # Swap each element above the diagonal with its mirror below it.
            mat[i][j], mat[j][i] = mat[j][i], mat[i][j]

    return mat

Print matrix in spiral pattern:

def print_matrix_in_spiral(mat: list[list]) -> None:
    """Print a matrix in clockwise spiral order, one boundary layer at a time."""
    R = len(mat)
    C = len(mat[0])

    top = 0
    left = 0
    bottom = R - 1
    right = C - 1

    while top <= bottom and left <= right:
        # Left to right along the top row.
        for i in range(left, right + 1):
            print(mat[top][i], end=" ")
        top += 1
        # Top to bottom along the right column.
        for i in range(top, bottom + 1):
            print(mat[i][right], end=" ")
        right -= 1
        # Right to left along the bottom row, if any rows remain.
        if top <= bottom:
            for i in range(right, left - 1, -1):
                print(mat[bottom][i], end=" ")
            bottom -= 1
        # Bottom to top along the left column, if any columns remain.
        if left <= right:
            for i in range(bottom, top - 1, -1):
                print(mat[i][left], end=" ")
            left += 1
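
A quick check of both functions on the 3×3 matrix defined earlier (expected output shown in comments):

matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

# Copy the rows first, since the transpose mutates its argument in place.
print(calculate_transpose_of_matrix([row[:] for row in matrix]))
# [[1, 4, 7], [2, 5, 8], [3, 6, 9]]

print_matrix_in_spiral(matrix)
# 1 2 3 6 9 8 7 4 5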