When a DSL is worth the cost: Lessons from quantum computing
Domain-specific languages (DSLs) often divide engineering teams. When they work, they make complex systems easier to reason about. When they fail, they become costly internal tools that no one maintains. The real challenge is not how to build a DSL, but when building one is justified.
That question will sit at the centre of “Designing Quantum Computing DSLs with Eclipse Xtext”, an OCX session by Matteo Di Pirro (senior software engineer) and Nicola La Gloria (co-founder and CTO) of Kynetics, both contributors to Eclipse Hara.
Today, most quantum programming tools are exposed as APIs for general-purpose languages such as Java or Python. While this lowers the entry barrier, it also forces developers to model quantum concepts through host-language abstractions. As Matteo explains in an interview, “the syntax that we have is complex. Writing quantum algorithms is still something that we need to learn how to do properly.”
He also mentions that quantum programming today is still closer to low-level work: “quantum code feels like assembly code, in the very beginning of computer programming.” Wrapping that complexity in a library does not automatically make it easier to understand.
A language-first approach using Eclipse Xtext offers a different path. Instead of exposing every technical detail, a DSL can focus directly on domain concepts such as qubits, initialisation, and interaction. The goal is intentional simplification: removing syntax that distracts from the problem being solved. In this sense, a DSL is not about adding expressive power, but about reducing cognitive overhead when working in a complex domain.
Importantly, Matteo is explicit about the limits of this approach. We can imagine semantic validation as a type system in a classical programming language: both can rule out certain classes of errors early, but neither can guarantee correctness in all cases. As he explains, “there are classes of errors that we cannot catch early on using a DSL, using semantic validation. Similarly, there are some errors that we cannot catch using type systems. These classes of errors won’t be caught statically.”
Quantum software further complicates this picture by introducing entirely new classes of failure that depend on runtime conditions or the characteristics of the target hardware. Acknowledging those limits, rather than hiding them, is part of designing trustworthy languages.
Beyond tooling, this OCX session will address why many DSLs fail in practice. Technical quality alone is not enough. Without an active community using and evolving the language, even well-designed DSLs stagnate. Requirements must come first, and languages should start small, evolving only as the domain becomes better understood.
At OCX, Matteo and Nicola will also deliver a second joint session, “Navigate the Complexity of Deploying AI at the Edge on Embedded Systems”, exploring similar modelling challenges in embedded and edge environments.
Attendees will leave with a clearer framework for deciding when a DSL is worth building, when an API is enough, and where language design genuinely reduces complexity. Through concrete examples from quantum computing and beyond, the session focuses on trade-offs, limits, and evolution.
If you are designing tools for complex domains or questioning whether a DSL is worth the cost, this session is best experienced live. Join Matteo Di Pirro and Nicola La Gloria in Brussels at the Open Community Experience 2026 to explore these decisions in depth and see how language engineering applies in real-world systems.
I am a software engineer, so most of my day is in front of a screen. Things respond quickly there. Code compiles or it doesn’t, a message gets a reply, something either works or it breaks. I got used to that without really noticing I had.
I had always liked the idea of gardening, just never done it. At some point the time and situation felt right and I started. Also wanted to do something that had nothing to do with screens or electronics. Soil, water, sunlight. That was the appeal.
I water a plant and walk away not knowing if it did anything. The soil looks the same the next morning, the leaves don’t visibly change, and if something is happening it’s somewhere I can’t see. Weeks went by where I couldn’t really tell if I was helping or just showing up.
Then one morning a leaf had an odd color, slightly yellow at the edges and curling inward. I took a photo and sent it to ChatGPT out of curiosity. It gave me a confident answer. I read it, tried to understand what the plant might need, and moved on. Somewhere along the way I realized I had started paying more attention to the plant itself rather than looking for someone to tell me what was wrong with it.
The basics turned out to be most of it. Good soil, mostly. Something to hold moisture, something to let it drain, something to feed the roots. Soil, cocopeat & compost. The balance matters more than the ingredients. And water only when the soil asks for it, not before.
One plant died and I still don’t know why. By the time something was clearly wrong it was already too late. There was no obvious moment I could point to.
The others came up slowly. Quietly, in their own order, at their own pace. It was nice to watch.
Some plants have looked exactly the same for weeks now. Same height, same leaves, no visible change. I notice it, but it doesn’t pull at me the way it used to. I water, I check the soil, I watch. Most mornings that’s enough.
If you use Claude Code, you’ve probably added development standards to your CLAUDE.md file. Something like “always fix root causes” or “no half solutions.”
And you’ve probably noticed Claude ignoring them mid-session.
This isn’t a bug in your instructions. It’s a documented problem with how CLAUDE.md content gets injected.
The CLAUDE.md Disclaimer Problem
When Claude Code loads your CLAUDE.md, it wraps the content in a framing that tells Claude it “may or may not be relevant” and should only be followed “if highly relevant to your task.”
This is documented across multiple GitHub issues:
#22309 — CLAUDE.md content wrapped in dismissive framing
#21119 — Claude treats explicit instructions as suggestions
#7777, #15443, #21385 — Various reports of Claude ignoring CLAUDE.md rules
The practical effect: as your conversation grows and Claude’s context fills with code, tool output, and discussion, your carefully written standards get deprioritized. And when the context window fills up and gets compacted, your CLAUDE.md values get summarized away with everything else.
A Better Approach: Hook-Based Reinforcement
Claude Code hooks are event-driven scripts that fire at specific moments in the session lifecycle. Unlike CLAUDE.md content, hook output arrives as clean system-reminder messages — no disclaimer, no “may or may not be relevant” framing.
I built a plugin called Claude Core Values that uses a three-layer reinforcement strategy:
| Layer | Hook Event | What It Does |
|---|---|---|
| Full injection | SessionStart | Injects all your values at session start — and re-injects them fresh after every compaction |
| Per-prompt reminder | UserPromptSubmit | Reinforces your motto on every single prompt |
| No disclaimer | Both | Hook output has no “may or may not be relevant” framing |
The SessionStart hook fires on startup, resume, clear, and crucially compact — meaning your values are automatically restored every time context gets compressed.
The UserPromptSubmit hook adds a single-line motto reminder (~15 tokens) on every prompt. Over a 50-turn session, that’s ~750 tokens — negligible against a 200k context window.
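Such a hook is just a script whose stdout Claude Code adds to the context when the event fires. The sketch below shows the shape of a UserPromptSubmit hook under that assumption; the file path, file format, and function name are illustrative, not the plugin's actual schema:

```python
from pathlib import Path

def build_reminder(values_path: Path) -> str:
    """Format the first line of a values file as a per-prompt reminder."""
    if not values_path.exists():
        return ""  # no config: inject nothing
    lines = values_path.read_text().strip().splitlines()
    if not lines:
        return ""
    return f"Core values reminder: {lines[0]}"

if __name__ == "__main__":
    # Whatever this prints arrives as a clean system-reminder message,
    # without the CLAUDE.md "may or may not be relevant" framing.
    reminder = build_reminder(Path.home() / ".claude" / "motto.txt")
    if reminder:
        print(reminder)
```

Because the script runs fresh on every prompt, the reminder survives compaction by construction: there is nothing in the context to summarize away.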
At session start, Claude sees the full values block:

```markdown
## Core Values & Development Standards

**Excellence is not negotiable. Quality over speed.**

### Quality Commitment
- **No Half Solutions**: Always fix everything until it's 100% functional.
- **No Corner Cutting**: Do the real work until completion.
...
```
On every subsequent prompt, it sees:
Core values reminder: Excellence is not negotiable. Quality over speed.
Both without the disclaimer. Both surviving compaction.
Installation
The plugin installs directly from GitHub, with no cloning needed.
During setup you pick a starter template:

- **Quality-obsessed**: No half solutions. No shortcuts.
- **Startup**: Ship fast, iterate rapidly, pragmatic quality.
- **Security-First**: Defense in depth, zero trust, OWASP compliance.
- **Minimal**: Simple baseline: working code, follow patterns, test before push.
The config gets saved to ~/.claude/core-values.yml (global) or .claude/core-values.yml (per-project). Edit the YAML directly anytime — changes take effect on the next session.
Beyond CLAUDE.md
The plugin also solves practical management problems:
Team distribution: plugin install gives everyone identical standards instead of “copy these 30 lines into your CLAUDE.md.”
Per-project overrides: Drop a different core-values.yml in any project’s .claude/ directory without touching global config.
Structured config: YAML with typed sections is easier to diff and review than freeform markdown.
Starter templates: Pick a philosophy and go, instead of staring at a blank file.
The Takeaway
CLAUDE.md is great for project-specific context — file conventions, architecture notes, gotchas. But for development standards you want Claude to follow every time, in every session, no exceptions — hook-based injection is the more reliable path.
The plugin is open source and MIT licensed: github.com/albertnahas/claude-core-values
Requirements: Claude Code, Python 3 (ships with macOS/Linux). PyYAML is optional — the plugin includes a zero-dependency fallback parser.
Quick, throwaway commit messages work in the moment. They fail six (actually one) months later.
Commit messages are not for today or for something special. They are for:
your future self
your teammates
automation pipelines
changelog generators
semantic versioning
Writing structured commit messages takes discipline. Under pressure, discipline disappears. And sometimes it disappears without any pressure at all, simply because we developers are lazy by nature.
This is where GitHub Copilot becomes interesting.
Instead of using Copilot just for code, you can use it as a commit quality guardrail. With the right instructions, it generates strict Conventional Commit messages automatically, directly inside:
VS Code
JetBrains Rider
Why Conventional Commits Still Matter
Conventional Commits are not about style.
They are about structure.
The format is simple:
type(scope): description
Example:
feat(auth): add JWT token validation
fix(api): handle null response in user endpoint
refactor(ui): simplify navbar component
Readable Git History
Compare:
update stuff
fix bug
changes
With:
fix(auth): prevent null pointer on login
feat(api): add user filtering by role
ci(github-actions): add build cache
The second history is self-documenting.
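Because the format is mechanical, it is also easy to lint. Here is a minimal validator sketch; the regex and function name are illustrative, while the type list and 50-character limit mirror the rules used in this article:

```python
import re

# Allowed types, matching the Copilot instructions later in the article.
TYPES = r"feat|fix|docs|style|refactor|perf|test|chore|ci"

# type(scope)?: description — lowercase type, optional scope,
# optional "!" for breaking changes, lowercase imperative description.
PATTERN = re.compile(rf"^({TYPES})(\([a-z0-9-]+\))?(!)?: [a-z].+$")

def is_conventional(subject: str) -> bool:
    """Check the first line of a commit message against the format."""
    return len(subject) <= 50 and bool(PATTERN.match(subject))
```

A check like this could run in a `commit-msg` hook or CI job, so the history stays self-documenting even when a message is written by hand.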
Using GitHub Copilot for Commit Messages in VS Code
Copilot can generate commit messages directly from the Source Control panel.
By default, generation is generic.
Custom instructions make it strict.
Add this to your settings.json:
```json
"github.copilot.chat.commitMessageGeneration.instructions": [
  { "text": "Follow Conventional Commits: type(scope): description." },
  { "text": "Use lowercase type and scope." },
  { "text": "Use imperative mood: 'add', 'fix', 'update', not past tense." },
  { "text": "Keep subject under 50 characters. No period." },
  { "text": "Describe the intent clearly. Avoid vague messages like 'update code'." },
  { "text": "Use only these types: feat, fix, docs, style, refactor, perf, test, chore, ci." },
  { "text": "Include a scope when the change targets a specific area." },
  { "text": "Ensure each commit represents one logical change." },
  { "text": "Add a body when needed, separated by a blank line." },
  { "text": "Use bullet points (*) in the body for multiple changes." },
  { "text": "Explain why the change was made, not only what changed." },
  { "text": "Add BREAKING CHANGE: in the footer when applicable." }
]
```
After staging changes, generate the commit message.
Instead of:
update login logic
You’ll get:
fix(auth): prevent null pointer on login
* add null check for user object
* improve error handling for invalid credentials
Copilot becomes consistent because the rules are explicit.
Using GitHub Copilot for Commit Messages in JetBrains Rider
In Rider, Copilot integrates in the Commit tool window.
With strict instructions, you can enforce full compliance.
Example instruction structure:
Follow the Conventional Commits specification strictly.
<type>(<scope>): <short description>
All sections except the first line are optional.
Use only these types: feat, fix, docs, style, refactor, perf, test, chore, ci.
Max 50 characters. Imperative mood. No period.
Each commit must represent one logical change.
Use BREAKING CHANGE: footer when applicable.
Understanding Django’s Architecture Beyond the File Structure
When developers first encounter Django, the framework feels clean, powerful, and “batteries-included.” But after the initial excitement, confusion starts.
Why?
Because most tutorials explain what the files are, not why they exist.
This article breaks down Django’s structure from an architectural perspective — not just folder-by-folder explanation, but system-level understanding.
Why Django Structure Confuses Juniors
Most juniors approach Django like this:
“I created a project. I created an app. Now I put logic somewhere and it works.”
The confusion happens because:
The distinction between project and app isn’t conceptually clear.
MTV is introduced superficially.
Business logic placement isn’t discussed.
The request lifecycle remains invisible.
Django’s “magic” hides architectural flow.
Without understanding the architecture, Django feels like controlled chaos.
With understanding, it becomes predictable and powerful.
Project vs App — The Most Misunderstood Concept
Let’s clarify this precisely.
The Django Project
A project is the configuration container.
It defines:
Global settings
Installed apps
Middleware
Root URL configuration
ASGI/WSGI entrypoints
It does not contain business logic.
Think of the project as:
The runtime configuration and environment boundary.
The Django App
An app is a modular unit of functionality.
It represents a domain boundary.
❌ Bad modular thinking
users_app/
orders_app/
payments_app/
✅ Better domain-oriented thinking
accounts/
billing/
analytics/
core/
Each app should encapsulate:
Models
Views
URLs
Domain logic related to that specific area
The app is not just a folder — it is a cohesive domain module.
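Under that framing, a domain-oriented project might be laid out like this (directory and file names are illustrative; `services.py` is one common convention for domain logic that outgrows the models):

```
myproject/        # project: configuration container
    settings.py
    urls.py       # root URL configuration
accounts/         # app: cohesive domain module
    models.py
    views.py
    urls.py
    services.py   # domain logic for this area
billing/
    ...
```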
MTV Properly Explained
Django follows the MTV pattern:
Model
Template
View
This is often compared to MVC, but they are not identical.
Conceptual Mapping
| Django | Classical MVC |
|---|---|
| Model | Model |
| Template | View |
| View | Controller |
Model
Represents data structure and database interaction.
Defines schema
Handles ORM logic
Encapsulates data behavior
Models should contain domain rules when appropriate.
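To make “domain rules on the model” concrete, here is a framework-agnostic sketch. In a real Django app this would be a `models.Model` subclass with ORM fields; the class and field names are illustrative:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Subscription:
    """Sketch of a model that owns its own domain rule."""
    started_on: date
    trial_days: int = 14

    def is_in_trial(self, today: date) -> bool:
        # The rule lives here, not in the view: every caller
        # (views, tasks, admin) gets the same answer.
        return today < self.started_on + timedelta(days=self.trial_days)
```

A view then simply asks `subscription.is_in_trial(...)` instead of re-deriving the date arithmetic itself.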
View
Despite the name, Django’s View acts more like a controller.
It:
Accepts requests
Orchestrates logic
Interacts with models
Returns responses
It should not:
Contain heavy business logic
Contain large data transformations
Become a dumping ground
Template
Responsible for presentation.
In API-based systems (e.g., Django REST Framework), templates are often replaced by serializers and JSON responses.
The Django Request Lifecycle (What Actually Happens)
Understanding this is critical.
When a request hits your server:
1. Client sends HTTP request.
2. Web server forwards request to Django (via WSGI or ASGI).
3. Middleware processes the request.
4. URL resolver matches the path.
5. Corresponding view is executed.
6. View interacts with models / services.
7. Response object is created.
8. Middleware processes response.
9. Response is returned to client.
The important insight:
Every request goes through a predictable pipeline.
Django is not magic — it is structured orchestration.
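The middleware part of that pipeline can be sketched in plain Python: each middleware wraps the next handler, touching the request on the way in and the response on the way out. The dict-based request/response and the names are illustrative stand-ins for Django's actual objects, though the factory shape mirrors Django's middleware style:

```python
from typing import Callable

Handler = Callable[[dict], dict]

def logging_middleware(get_response: Handler) -> Handler:
    """Wrap an inner handler, processing request then response."""
    def middleware(request: dict) -> dict:
        request["seen_by"] = request.get("seen_by", []) + ["logging"]
        response = get_response(request)   # inner middleware + view
        response["X-Logged"] = "true"      # response phase
        return response
    return middleware

def view(request: dict) -> dict:
    # The resolved view produces the response object.
    return {"body": f"hello, path={request['path']}"}

handler = logging_middleware(view)
```

Stacking more wrappers around `view` reproduces the onion: the outermost middleware is the first to see the request and the last to see the response.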
Modular Design Advice (How to Think Like a Mid-Level Developer)
As projects grow, default Django structure becomes insufficient.
In the era of AI, theoretical knowledge is more important than ever. Recently, while solving the Reverse Integer problem, I realized the real challenge wasn’t reversing digits but understanding 32-bit overflow and memory limits. Python hides overflow with dynamic integers, but low-level constraints still matter. AI can generate working code instantly, yet without knowing concepts like time complexity, integer ranges, or the Euclidean algorithm, it’s hard to judge correctness. Theory builds intuition and clarity. It helps you detect hidden constraints and avoid blind trust in generated solutions. AI is powerful, but fundamentals are what make you truly independent and confident.
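The overflow point is easy to make concrete: Python will happily produce the reversed value, so the signed 32-bit bound has to be checked explicitly. A short sketch (function name is illustrative):

```python
# Signed 32-bit range that Python's dynamic integers silently exceed.
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def reverse_int32(x: int) -> int:
    """Reverse the digits of x; return 0 if the result overflows 32 bits."""
    sign = -1 if x < 0 else 1
    reversed_value = sign * int(str(abs(x))[::-1])
    return reversed_value if INT_MIN <= reversed_value <= INT_MAX else 0
```

Without the final range check, the function would quietly return values no 32-bit machine could represent, which is exactly the hidden constraint generated code can miss.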
AI tools have become a core part of modern software development. Developers rely on them throughout the life cycle, from writing and refactoring code to testing, documentation, and analysis.
Once experimental add-ons, these tools now function as everyday assistants and are firmly embedded in routine workflows. But why have AI tools become so essential – and how are developers actually using them?
The insights in this article draw on findings from the JetBrains State of Developer Ecosystem Report 2025, which tracks how developers use tools, languages, and technologies, including AI tools, in real-world environments. Shifting the focus from technical model performance, this article looks at usage patterns, developer preferences, and adoption trends across tools, regions, and workflows.
Before we work through which AI tools developers use most, why they choose them, and how these tools fit into everyday work, let’s first clarify what AI tools are and why they matter so much right now.
Disclaimer: Please note that the findings in the article reflect data collected during the specific research period set out in the report.
Table of Contents
· What AI tools are and why they matter now
· Most popular AI tools among developers
· What makes developers choose one AI tool over another
· How developers use AI tools in daily workflows
· Global snapshot: How AI tool adoption differs across regions
· Barriers to adopting AI tools
· Future of AI tools: What developers want next
· FAQ
· Conclusion
What AI tools are and why they matter now
Today’s AI tools for developers span several categories. They include code assistants that suggest or generate code, as well as tools that review code autonomously. Many come as IDE integrations that understand project context.
There are also AI-powered search and navigation tools, refactoring helpers, and documentation generators. In addition, teams now use testing assistants and autonomous or semi-autonomous agents to support more complex workflows.
Understanding today’s AI tools list for developers matters because these tools directly address growing pressures in modern development. They shorten development cycles, reduce manual tasks, and help teams maintain quality, which is especially important as codebases grow.
This growing reliance makes it important to understand which tools developers actually use most. In the next section, we will see what these AI tools are.
Most popular AI tools among developers
Developers rarely rely on a single AI tool. Instead, they combine multiple tools depending on their IDE, workflow style, and project requirements. According to the AI usage insights in the JetBrains State of Developer Ecosystem Report 2025, adoption clusters around three main categories: IDE-native assistants, standalone AI-powered development environments, and browser-based or cloud chat tools.
Across these categories, the most popular AI assistants are GitHub Copilot, JetBrains AI Assistant, Cursor, Windsurf, and Tabnine. Adoption of these top AI tools varies based on ecosystem, IDE choice, and workflow style.
IDE-native assistants, such as GitHub Copilot and JetBrains AI Assistant, remain among the most popular AI tools because they operate inside the editor and integrate directly into existing workflows, making them more context-aware.
Standalone AI-focused editors and assistants, such as Cursor and Windsurf, often emphasise more experimental or agent-style workflows. This is an area that is evolving across the ecosystem, with increasing convergence between IDE-native tools and more agent-driven capabilities.
Other tools focus on specific priorities. For example, Tabnine attracts teams that prioritize privacy and local inference. Region-specific tools also play an important role in areas with strong domestic AI ecosystems or regulatory constraints.
This diversity becomes clearer when comparing the best AI tools for developers side by side.
Comparison table: AI tools overview
| AI tool | Typical use case | Underlying models | Distinct features | Integration type |
|---|---|---|---|---|
| GitHub Copilot | Code generation and completion | GPT family | Tight GitHub + VS Code workflows | IDE / Cloud |
| JetBrains AI Assistant | Context-aware help, refactoring | Claude / GPT / Gemini | Deep IDE context + privacy focus | In-IDE |
| Cursor | Inline edits, debugging, chat | Claude / Gemini | Fast UI, multi-step edits | IDE plugin |
| Windsurf | Autonomous task execution and code changes | Claude / GPT | Agent-like capabilities | Standalone |
| Tabnine | Privacy-oriented code suggestions | Proprietary / DeepSeek | Local inference options | IDE plugin |
What makes developers choose one AI tool over another
Developers are not choosing AI tools solely on novelty. They evaluate how well a tool fits existing workflows, how reliable the output feels, and whether the tool aligns with team constraints. The JetBrains State of Developer Ecosystem Report 2025 identifies several of these practical considerations that shape decision-making.
Integration quality ranks among the most important factors. Developers prefer AI coding tools that work seamlessly inside their preferred IDE. A tool that interrupts flow or requires constant context switching often fails to gain long-term adoption.
Accuracy and code quality are equally crucial. Developers expect AI coding tools to produce reliable results that they can trust. When outputs require extensive correction, confidence drops quickly.
Privacy and data security also influence developer AI preferences. This is especially true in enterprise environments. Tools that offer local processing or clear privacy guarantees often see stronger uptake in regulated industries.
Finally, pricing, transparency, and vendor reputation affect adoption. Developers value clear pricing models, flexible access, and vendors with a track record of supporting developer tools. Trust builds over time through consistency and ongoing communication.
Let’s see how developers evaluate each of these factors in this AI assistant comparison.
Key factors influencing tool choice
| Factor | Why it matters | How developers evaluate it |
|---|---|---|
| IDE integration | Supports smooth workflows | Works natively in their preferred IDE |
| Code accuracy and quality | Affects trust and usability | Produces correct, clear, and maintainable code |
| Privacy and security | Protects source code and IP | Provides clear data handling and local mode options |
| Pricing and access | Impacts adoption at scale | Offers flexible tiers and predictable costs |
| Transparency | Builds confidence | Discloses model provider and data policies |
| Vendor reputation | Signals long-term reliability | Demonstrates a history of dev tools and quality support |
How developers use AI tools in daily workflows
Developers integrate AI tool usage throughout the development life cycle rather than limiting it to a single task. Most workflows combine several forms of AI access depending on the problem at hand.
When coding with AI tools, developers may use in-IDE assistants for context-aware code help and chat-based interfaces for problem-solving and prototyping. In addition, developer AI assistant usage may combine browser tools for quick inline answers, APIs for automation and CI/CD tasks, and local models for privacy-restricted environments.
Across these use cases, developers are clearly no longer relying on a single tool. AI workflows increasingly involve choosing the right tool for the task at hand, be it writing code, refactoring, debugging, generating documentation, testing, or understanding unfamiliar code.
The JetBrains State of Developer Ecosystem Report 2025 indicates that developers frequently switch between AI access points in this way. They choose the interface that best fits the task rather than expecting one tool to handle everything.
Workflow types and examples
| Workflow type | Typical use case | Example tools | Integration context | Developer benefit |
|---|---|---|---|---|
| In-IDE assistance | Code suggestions, refactoring | JetBrains AI Assistant, GitHub Copilot | IDE | Immediate, context-aware help |
| Chat-based interaction | Explanations, brainstorming, regex, prototyping | ChatGPT, Claude | Browser / Cloud | Fast iteration and reasoning |
| API integration | Automation, CI tasks, documentation | OpenAI API, Anthropic API | Backend / DevOps | Scalable automation |
| Browser extensions | Quick inline code insights | Codeium, AIX | Web | Lightweight access |
| Local/private models | Secure, offline coding | Tabnine, DeepSeek (self-hosted models) | On-premises / Enterprise | High privacy and control |
With AI firmly established in daily workflows, the next section looks at regional differences in AI tool adoption.
Global snapshot: How AI tool adoption differs across regions
Global AI adoption patterns do not look the same everywhere. Regional ecosystems, regulations, and developer communities shape which tools gain traction. The JetBrains State of Developer Ecosystem Report 2025 highlights clear regional AI trends.
In North America, developers commonly adopt mainstream tools such as GitHub Copilot, JetBrains AI Assistant, and Claude-based assistants. Strong cloud infrastructure and rapid LLM innovation encourage experimentation with multiple tools.
European developers balance adoption with privacy considerations. Data residency and compliance requirements influence tool selection, leading to broader interest in solutions that offer transparency and local processing options.
In the Asia-Pacific region, developers often combine global tools with regional offerings. Mobile-first development cultures and fast-growing ecosystems drive rapid experimentation, particularly with cloud-based assistants.
Mainland China stands out due to its strong domestic AI ecosystem. Developers there frequently rely on local tools and models such as DeepSeek, Qwen, and Hunyuan, which align better with infrastructure and regulatory realities.
Regional highlights and local leaders
| Region | Most used tools | Local ecosystem drivers | Notable observations |
|---|---|---|---|
| North America | GitHub Copilot, JetBrains AI Assistant, Claude | Strong cloud and LLM innovation | High multi-tool adoption |
| Europe | JetBrains AI Assistant, GitHub Copilot | Privacy regulations, data residency | Balanced adoption across tools |
| Asia-Pacific | GitHub Copilot, Gemini | Mobile/cloud-first development cultures | Rapid experimentation and growth |
| Mainland China | DeepSeek, Qwen, Hunyuan | Strong domestic AI ecosystem | Preference for locally hosted models |
While AI tool usage worldwide is undoubtedly gaining momentum, barriers to AI adoption also exist, which we explore in the next section.
Barriers to adopting AI tools
Despite growing interest, not all developers or teams adopt AI tools easily. The JetBrains State of Developer Ecosystem Report 2025 shows that such AI adoption challenges often stem from uncertainty rather than opposition.
Privacy and security concerns remain the most common AI coding tool barriers. Teams worry about exposing sensitive code or intellectual property, especially when tools rely on cloud processing. Without clear guarantees, organizations may restrict or ban usage.
Legal and ownership questions are other reasons why developers avoid AI tools. Developers and managers want clarity about who owns AI-generated code and how licensing applies. Uncertainty leads many teams to limit AI use to non-critical tasks.
Individual barriers matter as well. Some developers lack confidence in using AI tools effectively or struggle to evaluate output quality. Others distrust AI suggestions due to past inaccuracies.
Cost, licensing, and infrastructure constraints can also limit adoption, particularly for larger teams. Per-seat pricing and usage caps further complicate budgeting and rollout decisions.
Obstacles and evaluation criteria
| Barrier | Why it matters | Typical impact |
|---|---|---|
| Privacy and security concerns | Increases the risk of exposing sensitive code | Usage blocked or restricted |
| IP and code ownership concerns | Creates legal uncertainty | Hesitation to rely on AI for core code |
| Lack of knowledge or training | Reduces confidence in using tools | Slower individual adoption |
| Accuracy and reliability issues | Impacts trust in outputs | More manual review required |
| Internal policies and processes | Requires compliance and complex approval workflows | Delayed tool rollout |
| Cost and licensing | Exceeds budget or per-seat limits | Partial or limited deployment |
In the next section, we move from the barriers of today to developers’ hopes for the future.
Future of AI tools: What developers want next
Developers do not simply want more AI features. They want better ones. The JetBrains State of Developer Ecosystem Report 2025 not only indicates greater adoption but also shows that developers are hopeful about the future of AI. Their expectations focus on reliability, integration depth, and control rather than novelty.
Higher code quality tops developer AI expectations. Developers want fewer hallucinations, cleaner outputs, and suggestions that respect project conventions. Trust grows when AI behaves predictably.
Deeper IDE integration also ranks high. Developers expect future AI tools to understand entire projects, not just individual files. Context retention across sessions and multi-file awareness are increasingly important.
Privacy remains central. Many developers want local or on-device options that allow them to use AI without sharing code externally. Transparent data handling builds confidence.
Pricing clarity and explainability also influence future AI assistant trends. Developers want predictable costs and better insight into why tools suggest certain changes.
But most significantly, as AI tools evolve, developers want support for complex workflows and architecture reasoning. The goalpost is also shifting. Developers now expect future AI tools to move beyond basic autocomplete and act as collaborative partners.
The following FAQ addresses some of the most common questions developers ask when evaluating and using AI tools.
FAQ
What are the most popular AI tools among developers today? According to the report’s findings, developers commonly use tools such as GitHub Copilot, JetBrains AI Assistant, Cursor, and Tabnine, often combining them rather than using a single tool.
Are AI tools safe for use with private or proprietary code? Safety depends on the tool. Developers increasingly prefer tools that provide clear privacy policies or local processing options.
Which AI tools work best inside IDEs? IDE-native tools tend to perform best for daily coding tasks because they understand project context and workflows.
Do developers prefer local AI models or cloud-based solutions? Preferences vary. Some developers value cloud flexibility, while others prioritize local models for privacy and compliance.
How do AI tools help with debugging and documentation? They explain code, identify errors, suggest fixes, and generate comments or documentation drafts.
Are AI tools suitable for enterprise teams with strict security requirements? Many are, especially when they offer strong privacy guarantees, administrative controls, and predictable pricing.
Can AI tools speed up development without reducing code quality? Yes, when developers use them intentionally. AI tools speed up repetitive tasks such as code generation, refactoring, testing, and documentation, while reviews, IDE checks, and automated tests help maintain quality.
Conclusion
AI tools have evolved from optional add-ons into essential components of modern software development. Developers now rely on them for coding, refactoring, documentation, testing, and learning, integrating AI assistance throughout daily workflows.
Current adoption trends show that developers value accuracy, deep integration, and privacy above experimental features. The JetBrains State of Developer Ecosystem Report 2025 reflects broad and growing use across regions, tools, and development styles.
As AI tools continue to evolve, they move toward deeper context awareness, stronger reasoning, and more secure deployment options.
For developers, AI no longer represents a future possibility. It has become a practical, everyday partner in building software.