Now that we’re in 2026, we want to take a moment to reflect on the past year and share what’s ahead. YouTrack continues to grow – we’re seeing more teams than ever making the switch, and we’re deeply grateful for the trust you place in us. Together with our consulting partners, we kicked off a number of migration projects with Fortune 500 customers last year, and even more are lined up for 2026. Your feedback has been the driving force behind the decisions we’ve made, and it continues to shape our roadmap.
This year, our focus is on making YouTrack even more powerful for large, scaling teams – here’s what we’re building.
Our commitment to YouTrack Server and Cloud
Giving you the flexibility to choose the hosting option that best fits your organization and data governance needs has always been our priority – and we intend to keep it that way for years to come. While Atlassian has been pushing Jira and Confluence customers toward cloud-only hosting and ending server support altogether, we’re going in the opposite direction. Technically, we maintain one codebase for both – every feature we build is available to Server and Cloud alike.
Continuous YouTrack Server support
We feel a deep sense of responsibility toward customers who choose YouTrack Server, especially those embarking on large-scale, long-term migration projects. Server hosting isn’t a legacy choice – it’s a valid long-term strategy, and it deserves our full support. Whether you are a growing team or a large enterprise, YouTrack Server will remain available to teams of any size. Behind the scenes, we’re also investing in our database architecture to ensure both hosting options continue to receive equal support and performance at scale.
New European data center for YouTrack Cloud
For YouTrack Cloud customers, we’re providing more flexibility and compliance with European data residency requirements. We are adding Frankfurt, Germany, as a new European data center location hosted by Amazon Web Services (AWS). Starting this February, new YouTrack Cloud instances can be configured with Germany as the selected location. For existing instances, changing the location where your data is stored requires the YouTrack Support team to migrate your instance from one data center to another. If you want to move your instance to a different data center, submit a request to YouTrack Support.
What’s ahead in 2026
Our 2026 roadmap is shaped by the people who use YouTrack every day. For teams migrating from tools like Jira and Confluence, we’re investing in smoother imports, expanding opportunities for app developers, and building the infrastructure to support large-scale transitions. We’re also making the Knowledge Base more collaborative for migrating teams bringing their documentation with them, and for everyone already using YouTrack.
For project managers, this is a big year – we’re introducing Whiteboards as a major new way to plan visually, and we plan to enhance Gantt charts and Agile Boards.
For B2B support teams, we’re expanding YouTrack Helpdesk with client organizations, enabling them to provide tailored experiences for different clients.
We’re also working to integrate AI assistance more deeply into every user’s daily workflow.
For growing and enterprise teams migrating to YouTrack
We work hands-on with large organizations through every step of their migration journey. What we learn from these projects directly shapes our roadmap – and we deliberately keep it flexible so we can quickly address challenges as they arise.
Enhancing the migration experience
We are working to make the migration from other project management tools to YouTrack easier and faster. We are continuously improving our existing import options, with a stronger focus on Jira and Confluence. You can also expect more built-in import options, including a new ClickUp migration wizard.
Creating new opportunities for app developers
Our consulting partners play a key role in supporting enterprise customers through migrations and project customization.
We’re investing in expanding opportunities for app developers – both for partners building solutions for their clients and for independent developers contributing to the JetBrains Marketplace app collection. You can now publish paid apps and build custom apps for AI-powered tools using YouTrack’s remote MCP Server. If you’re interested in working with a partner, contact us and we’ll connect you with the right team.
Improving YouTrack performance
We continue to work on migrating YouTrack to a new database that supports a multi-node environment, improving overall performance and ensuring the scalability and reliability required for large instances.
Collaborative editing in the Knowledge Base
For teams migrating from Confluence, a powerful Knowledge Base is essential. We’re working on collaborative article editing – and you’ll find the full details in the “For everyone” section below.
For project managers
Introducing YouTrack Whiteboards
Planning a project often starts with ideas rather than tasks. With YouTrack Whiteboards, we’re introducing a new, flexible way for teams to collaborate from a bird’s-eye view – whether you’re starting from scratch or building on existing project content.
You’ll be able to brainstorm and visualize ideas on a shared whiteboard-style interface in real time. Once notes are ready to be turned into actions, they can be transformed into tasks and articles with preconfigured dependencies, fully integrated into your YouTrack project’s context.
Gantt charts and Agile Boards
We plan to enhance Gantt charts with an improved UI for easier timeline viewing and better performance when making changes. Agile Boards will gain the ability to embed task lists, making it faster to navigate to important tasks directly from the board.
For B2B support teams
Helpdesk with client organizations
We’re expanding YouTrack Helpdesk to better support B2B scenarios. Our next step is introducing client organizations, enabling support teams to provide tailored experiences for different clients. You’ll be able to manage tickets from multiple client organizations at once, while keeping requests clearly separated. All tickets related to a client organization will be visible to its members by default, giving each client full visibility into the status and progress of their requests.
For everyone
Collaborative editing in your Knowledge Base
Collaborative article editing has been one of the most requested features in YouTrack – and one that’s technically challenging to build. We’re looking forward to finally making it happen. You’ll be able to work on content together with your team members in real time, see who else is editing, and make changes simultaneously. We also plan to improve the experience of working with tables by introducing more convenient editing options.
AI assistance
These days, it’s hard to imagine our day-to-day workflows without AI. We’ll continue enhancing the free AI assistance available out of the box in YouTrack, with a focus on contextual capabilities seamlessly integrated into your workflow. Here’s a preview of what we plan to make available soon:
An AI-powered chat that supports you in all aspects of your work. Ask YouTrack via text or voice commands to update tasks, search through articles, manage planning on Whiteboards, and perform many other project-wide actions.
Creating tasks from Slack will become easier with semantic AI, letting you generate task drafts from conversation context with team members or clients.
A new My Work page will provide a personalized view of tasks and articles, organized into widgets to help every user stay focused on what matters most.
Agent-based automation
Delivering AI automation remains our focus. We’ll continue to make the data available in YouTrack ready for consumption and manipulation by AI agents.
For teams that prefer to work on projects from their existing LLM, IDE, or agent platform, we’ll expand the range of predefined actions available via YouTrack’s remote MCP server. We’ll also build custom integrations for popular AI-powered tools. Starting this February, YouTrack will be available as part of n8n, empowering you to automate workflows and connect YouTrack with hundreds of apps and services.
Let’s shape the future of YouTrack together
We’d love to hear from you! Your feedback shapes YouTrack’s future, and we’re always open to ideas, suggestions, and insights. Whether you want to share a feature request, an improvement suggestion, or just your thoughts, get in touch with us by commenting on this blog or using our public project tracker.
Thank you for being a part of the YouTrack community. Together, we’re building a more powerful YouTrack for 2026 and beyond.
Today I learned about the glob operator, creating a library crate, and multiple binary crates
The Glob Operator:
The glob operator (*) lets you import everything from a module at once.
At first, it felt powerful. Almost too powerful.
Up until now, Rust has been teaching me to be explicit, to name exactly what I’m bringing into scope. The glob operator relaxes that slightly.
That contrast made me think. Rust gives you convenience, but it also gives you responsibility.
Even as a learner, I can see how using a glob import might make code shorter but potentially less clear. It made me more aware of how imports shape readability.
Creating a Library Crate:
Learning how to create a library crate changed how I view Rust projects.
Before this, everything I built felt like “a program.” Now I see that Rust encourages you to build reusable logic, code meant to be consumed by other parts of the project or even other projects.
A library crate feels like saying: this code is meant to be depended on.
That shift in perspective, from writing code for execution to writing code for reuse, feels like a major step in maturity.
Multiple Binary Crates:
This one surprised me. A single project can have multiple binary crates, meaning multiple entry points.
That made me realize Rust doesn’t assume your project is a single purpose tool. You can structure a project to serve different commands, roles, or execution flows while sharing the same core logic.
It made everything I’ve learned about modules and visibility suddenly make more sense. Structure supports scale.
Some languages can have multiple main packages in a project, but they need to be in separate directories. Rust’s approach with multiple binaries in the same workspace feels similar but more integrated.
At this point, I’m starting to see Rust in layers.
Ownership taught me about memory discipline. Enums taught me about modeling uncertainty. Modules taught me about structure. Visibility taught me about boundaries.
Crates and binaries are teaching me about architecture.
Rust doesn’t just teach me how to code. It teaches me how to design systems.
I’m still learning and still early; the picture will get clearer soon.
When you’re building projects, do you start thinking about reusability and architecture upfront, or do you refactor toward it once things get complex?
// Using the glob operator (imports everything)
use std::collections::*;

fn example_glob() {
    let mut map = HashMap::new();
    map.insert("key", "value");
}

// Library crate structure
// src/lib.rs
pub fn greet(name: &str) -> String {
    format!("Hello, {}", name)
}

pub fn farewell(name: &str) -> String {
    format!("Goodbye, {}", name)
}

// Binary crate that uses the library
// src/main.rs
use my_library::greet;

fn main() {
    println!("{}", greet("Rust"));
}

// Additional binary crate
// src/bin/other_tool.rs
fn main() {
    println!("This is a separate binary in the same project");
}
// Documentation comment with examples
/// Adds two numbers together.
///
/// # Examples
///
/// ```
/// let sum = my_library::add(2, 3);
/// assert_eq!(sum, 5);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
The Arm®v8-M architecture introduced a security extension called TrustZone®*, which splits the firmware running on the MCU into two worlds: secure and non-secure. In this blog post, I want to discuss how to work effectively on STM32 projects using this technology. We’ll get you all set up to use the latest and greatest code analysis tools with conventional debugging in CLion.
We’ll use the following setup:
CLion 2026.1 EAP (Early Access Program) on Windows.
STM32 NUCLEO-L552ZE-Q board.
We also need to install:
STM32CubeProgrammer to configure the hardware.
STM32CubeCLT 1.20.0 with the bundled ST-LINK_gdbserver and cross-compiler to build and debug the STM32 firmware.
You can find the project we’ll be using as a showcase on GitHub. The initial stub was generated by STM32CubeMX 6.16.0. If you want to follow along, you can use a slightly older version of CLion; the minimal required version is 2025.3.2.
Understanding the STM32 TrustZone-based project structure
The secure TrustZone mode is a privileged one and can serve requests from the unprivileged non-secure mode. Why would we want to use this? The reasoning is similar to why we have a user space in good old desktop computers – we don’t trust some code enough, and we don’t want it interfering with critical tasks. Mission-critical parts go into secure mode, while the processing of some user, wireless, remote, or internet data runs in non-secure mode. Thus, we isolate the important stuff from the exposed interface, such as Wi-Fi or Bluetooth.
Even if there is a vulnerability in the internet-connected code, the important core remains unaffected. For example, even if the device’s non-secure application code is compromised, the secure bootloader allows a new, fixed application to be reflashed remotely and the device to be recovered without physical access.
So, what changes in the project structure compared to the default STM32 CMake project?
Tip: In STM32CubeMX, peripherals can be assigned to secure and non-secure zones, as shown in the image above.
The code generated by STM32CubeMX is actually two independent subprojects wrapped in a superproject that can build both. The root CMake project provides only the plumbing needed to build the two subprojects. It also contains the shared code referenced by both subprojects, such as the hardware drivers and the HAL on top of them. In CMake terms, this is done by the ExternalProject_Add directive. Here is how this superproject structure looks in CLion:
There is one important caveat, though: The subprojects are not configured until the project is actually built. This means that for CMake to report any information CLion uses for code insight, the superproject must be built first. CLion will then automatically gather the necessary information.
Configuring the project in CLion
To start, clone our example repository from GitHub and open the .ioc file as a project. The default editor that opens should greet you with the name of the project’s MCU and an option to open it with STM32CubeMX for reconfiguration.
If you don’t have access to the hardware we’re using, you can alternatively generate your own project with STM32CubeMX. Select the option to generate the project with TrustZone enabled, configure the peripherals you need, and generate for CMake (it does not matter which compiler you choose; we support both GCC and ST-ARM-CLANG).
As I’ve noted above, all we need to do now is build the project, and we’ll get code insight in the subprojects.
If you’re new to CLion, refer to our more thorough documentation to learn how to open an STM32 project or create a new one, configure it, and build it.
Tip: CLion respects the FOLDER property of CMake targets, but this creates unnecessary structure in the project run configurations. You can go to the advanced settings and turn this feature off by disabling Group CMake run configurations by FOLDER property.
Setting up debugging
Unfortunately, the CMake targets corresponding to the secure and non-secure subprojects added to the superproject don’t include information about the compiled files. You will need to enter this manually in the run configuration. Edit both run configurations and select the corresponding binaries as the executables:
We are looking into ways to work around this limitation of CMake’s external projects (CPP-48380).
Tip: The non-secure target depends on the secure one, so when you build the non-secure one, the secure one is built as well.
Consult the manual for the MCU you are using to learn how to enable and disable TrustZone on your particular hardware. Following the STM introductory tutorial, we used STM32CubeProgrammer to configure the following option bytes on our board: TZEN=1, SECWM2_PSTRT=0x1, and SECWM2_PEND=0x0.
If you’re using our example project or followed our instructions and opened the .ioc file as a project, you already have a debug server set up. If you started from scratch, an ST-LINK debug server should already be pre-selected and pre-configured in Settings | Debugger. It’s designed for a streamlined setup and intended to work in the majority of common cases.
However, today we are discussing a somewhat more complex case, so we need something more powerful, more… generic. If you’re following our example, you will find the Generic debug server already set up in the project – we’ll use that one. Note that you might need to enable Debug Servers in Settings | Advanced Settings | Debugger to see it.
If you started from scratch, convert the ST-LINK debug server to Generic (using the button next to the Name field). This is a much more powerful option that allows full customization. We’ll need to adjust a few things to be able to flash two images, instead of just one, as the default. We are looking into automating this process (CPP-48379).
In the Debugger tab of the generic debug server, go to the Connection section, select the Script | Custom option, and add the following:
# Connect to GDB Server
$GDBTargetCommand$
# Flash NonSecure binary
exec-file NonSecure/build/stm32l5-trustzone_NS.elf
load
# Flash Secure binary and load its symbols
file Secure/build/stm32l5-trustzone_S.elf
load
# Load symbols from NonSecure binary
add-symbol-file NonSecure/build/stm32l5-trustzone_NS.elf
# Reset the MCU
monitor reset
You can find an extended version of this script with logging echoes in our example.
The script connects to your device, uploads both non-secure and secure firmware, loads the necessary debug information, and resets the device.
Tip: The $GDBTargetCommand$ expression is an IDE macro that expands to the connection script as though it were generated in automatic mode. A preview is shown when you have automatic mode selected – for example, target remote tcp:localhost:12345.
Since we’re now performing all these steps manually, let’s disable the automatically added ones so we don’t do the same work twice. In the Device Settings tab, set Upload executable to device to Never and deselect both options for Reset.
Now, if you’ve followed our hardware setup, you should be able to set up a couple of breakpoints and start debugging!
Be careful, though, and don’t get too far ahead of yourself. The number of hardware breakpoints is limited by design (6 on our STM32L5), so you can run out of them faster than you might expect, especially when working with shared code. A breakpoint in shared code costs twice as much: the code is compiled into both images, flashed twice, and requires a hardware breakpoint for each copy.
Disabling TrustZone
After following this article, you may need to disable TrustZone mode on your MCU. Here, we briefly summarize instructions from the manual mentioned above. Refer to your device’s manual for instructions on how to proceed in your case.
To disable TrustZone:
Raise the readout protection level to 0.5 or 1 (for example, by setting RDP to 0xDC for level 1), which disables the debugging of secure code. Note that setting the protection level to 2 would also disable debugging for non-secure code, which is why we didn’t do this earlier.
Raise the BOOT0 pin to VDD voltage.
Cycle the IPP jumper (disconnect and reconnect it). At this stage, two LEDs should illuminate. After that, you should be able to connect with the STM32CubeProgrammer in Hot-plug mode.
Simultaneously set the readout level to 0 (RDP to 0xAA) and disable TZEN.
If necessary, you can now revert the memory security level for the second flash bank.
What’s next
Have any questions, or did something not work as described? Please leave us a comment, or visit us at Embedded World 2026, Hall 4, booth 146. We look forward to your feedback!
We plan on writing a similar walkthrough for working with dual-core MCUs and MCUs with bootflash or bootrom memory. The project structure in those cases is similar to external CMake projects, but the debugging experience differs.
As mentioned earlier, we’re actively working to improve support for STM32 projects. Any ideas on what would fit your workflow are very welcome, so please file an issue in YouTrack.
* Arm and TrustZone are registered trademarks of Arm Limited (or its subsidiaries or affiliates) in the US and/or elsewhere.
My name is Hasan, and I am a software engineer. For a very long time, I watched fellow QA engineers suffer through repetitive QA processes and whole teams struggle with the steep learning curves of really bloated test management tools like Xray and Zephyr. I was one of them, and I finally built Testream (it took me a while though 🙂).
What I wanted to fix:
Too much setup and too many layers to learn
Manual test cases eat away at the QA effort that could go into automation
Poor visibility between test execution and project tracking
Painful release sign-offs while collecting test results
Invoices that hit harder than flaky tests on release day 🫢
What Testream offers:
Code-first test management
Test results from actual real test runs (codebase/CI/CD)
Native reporters for popular test frameworks
Free Jira app integration
No per-seat pricing model, open to the whole team
Please claim your free API key and give it a go! Would really appreciate your feedback on the product and onboarding. 🙏
SQL (Structured Query Language) is a powerful tool for searching through large amounts of data and returning specific information for analysis. Learning SQL is crucial for anyone aspiring to be a data analyst, data engineer, or data scientist, and helpful in many other fields such as web development or marketing.
SQL Joins
JOINs in SQL are clauses used to combine rows from two or more tables based on a related column between those tables. They are predominantly used to extract data from tables that have one-to-many or many-to-many relationships.
There are four main types of joins you need to understand:
(INNER) JOIN
LEFT (OUTER) JOIN
RIGHT (OUTER) JOIN
FULL (OUTER) JOIN
INNER JOIN
INNER JOIN is used to retrieve rows where matching values exist in both tables. It helps in:
Combining records based on a related column.
Returning only matching rows from both tables.
Excluding non-matching data from the result set.
Ensuring accurate data relationships between tables.
Syntax:
SELECT left_table.id, left_table.left_val, right_table.right_val
FROM left_table
INNER JOIN right_table
ON left_table.id = right_table.id;
LEFT JOIN
LEFT JOIN is used to retrieve all rows from the left table and matching rows from the right table. It helps in:
Returning all records from the left table.
Showing matching data from the right table.
Displaying NULL values where no match exists in the right table.
Performing outer joins, also known as LEFT OUTER JOIN.
Syntax:
SELECT left_table.id, left_table.left_val, right_table.right_val
FROM left_table
LEFT JOIN right_table
ON left_table.id = right_table.id;
RIGHT JOIN
RIGHT JOIN is used to retrieve all rows from the right table and the matching rows from the left table. It helps in:
Returning all records from the right-side table.
Showing matching data from the left-side table.
Displaying NULL values where no match exists in the left table.
Performing outer joins, also known as RIGHT OUTER JOIN.
Syntax:
SELECT left_table.id, left_table.left_val, right_table.right_val
FROM left_table
RIGHT JOIN right_table
ON left_table.id = right_table.id;
FULL JOIN
FULL JOIN is used to combine the results of both LEFT JOIN and RIGHT JOIN. It helps in:
Returning all rows from both tables.
Showing matching records from each table.
Displaying NULL values where no match exists in either table.
Providing complete data from both sides of the join.
Syntax:
SELECT left_table.id, left_table.left_val, right_table.right_val
FROM left_table
FULL JOIN right_table
ON left_table.id = right_table.id;
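To make the differences between join types concrete, here is a small runnable sketch using Python’s built-in sqlite3 module. The customers/orders tables and all names are invented for illustration, and only INNER and LEFT JOIN are shown, since the SQLite versions bundled with many Python builds predate RIGHT/FULL JOIN support (SQLite 3.39):

```python
import sqlite3

# In-memory demo database; table and column names are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Bob'), (3, 'Cleo');
    INSERT INTO orders VALUES (10, 1, 50.0), (11, 1, 25.0), (12, 2, 40.0);
""")

# INNER JOIN: only customers with at least one matching order appear.
inner = conn.execute("""
    SELECT c.name, o.amount
    FROM customers c
    INNER JOIN orders o ON c.id = o.customer_id
    ORDER BY c.name, o.amount
""").fetchall()
print(inner)  # Cleo (no orders) is excluded

# LEFT JOIN: every customer appears; missing matches show as NULL (None).
left = conn.execute("""
    SELECT c.name, o.amount
    FROM customers c
    LEFT JOIN orders o ON c.id = o.customer_id
    ORDER BY c.name, o.amount
""").fetchall()
print(left)  # Cleo appears with amount None
```

Note how the LEFT JOIN result has one extra row: the non-matching customer survives, with NULL filling the columns from the right table.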
Core Insights
SQL joins are fundamental for relational data modeling, enabling the combination of rows from multiple tables based on defined relationships, typically via primary and foreign keys.
Proper join selection directly affects result cardinality, null propagation, and business logic interpretation. Performance considerations include indexing join columns, minimizing unnecessary joins, and understanding join order in execution plans.
Key takeaways: joins operationalize relational integrity, drive multi-table analytics, and must be designed carefully to avoid duplication, unintended filtering, or performance degradation, especially in high-volume transactional or analytical databases.
SQL Window Functions
A window function in SQL is a type of function that performs a calculation across a specific set of rows (the ‘window’ in question), defined by an OVER() clause.
Window functions use values from one or multiple rows to return a value for each row, which makes them different from traditional aggregate functions, which return a single value for multiple rows.
Similar to aggregate functions used with GROUP BY, a window function performs calculations across multiple rows. Unlike aggregate functions, however, it does not collapse those rows into a single output row.
Key components of SQL window functions
The syntax for window functions is as follows:
SELECT column_1, column_2, column_3, function()
OVER (PARTITION BY partition_expression ORDER BY order_expression) as output_column_name
FROM table_name;
In this syntax:
The SELECT clause defines the columns you want to select from the table_name table.
The function() is the window function you want to use.
The OVER clause defines the partitioning and ordering of rows in the window.
The PARTITION BY clause divides rows into partitions based on the specified partition_expression; if not specified, the result set will be treated as a single partition.
The ORDER BY clause uses the specified order_expression to define the order in which rows will be processed within each partition; if not specified, rows will be processed in an undefined order.
Finally, output_column_name is the name of your output column.
These are the key SQL window function components. One more thing worth mentioning is that window functions are applied after the processing of WHERE, GROUP BY, and HAVING clauses. This means you can use the output of your window functions in subsequent clauses of your queries.
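Because window functions run after GROUP BY, they can operate on the grouped results themselves. Here is a minimal sketch using Python’s sqlite3 (the employees table and all values are invented; window functions require SQLite 3.25+, which recent Python builds bundle), ranking per-department salary totals in the same query that computes them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER);
    INSERT INTO employees VALUES
        ('Ann', 'eng', 100), ('Ben', 'eng', 80),
        ('Cal', 'ops', 70),  ('Dee', 'ops', 60), ('Eve', 'hr', 90);
""")

# GROUP BY collapses rows per dept first; the window function then ranks
# the already-aggregated totals.
rows = conn.execute("""
    SELECT dept,
           SUM(salary) AS total,
           RANK() OVER (ORDER BY SUM(salary) DESC) AS dept_rank
    FROM employees
    GROUP BY dept
    ORDER BY dept_rank
""").fetchall()
print(rows)
```

Without window functions, ranking grouped totals would require a subquery or self-join around the aggregation.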
The OVER() clause
The OVER() clause in SQL is essentially the core of window functions. It determines the partitioning and ordering of a rowset before the associated window function is applied.
The OVER() clause can be applied with functions to compute aggregated values such as moving averages, running totals, cumulative aggregates, or top N per group results.
The PARTITION BY clause
The PARTITION BY clause is used to partition the rows of a table into groups. This comes in handy when dealing with large datasets that need to be split into smaller parts, which are easier to manage. PARTITION BY is always used inside the OVER() clause; if it is omitted, the entire table is treated as a single partition.
The ORDER BY clause
The ORDER BY determines the order of rows within a partition; if it is omitted, the order is undefined.
For instance, when it comes to ranking functions, ORDER BY specifies the order in which ranks are assigned to rows.
Frame Specification
In the same OVER() clause, you can specify the upper and lower bounds of a window frame using one of the two subclauses, ROWS or RANGE. The basic syntax for both of these subclauses is essentially the same:
ROWS BETWEEN lower_bound AND upper_bound
RANGE BETWEEN lower_bound AND upper_bound
And in some cases, they might even return the same result. However, there’s an important difference.
In the ROWS subclause, the frame is defined by beginning and ending row positions. Offsets are differences in row numbers from the current row number.
As opposed to that, in the RANGE subclause, the frame is defined by a value range. Offsets are differences in row values from the current row value.
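The ROWS-versus-RANGE distinction is easiest to see with tied ORDER BY values. A sketch using Python’s sqlite3 (the sales table is invented): with ROWS, tied rows get position-based running sums; with RANGE, tied rows are peers and share the same running sum.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (day INTEGER, amount INTEGER);
    INSERT INTO sales VALUES (1, 10), (2, 10), (3, 20);
""")

# Order by amount so days 1 and 2 are peers (both have amount 10).
rows_frame = conn.execute("""
    SELECT day, amount,
           SUM(amount) OVER (ORDER BY amount
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS run
    FROM sales ORDER BY amount, day
""").fetchall()

range_frame = conn.execute("""
    SELECT day, amount,
           SUM(amount) OVER (ORDER BY amount
               RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS run
    FROM sales ORDER BY amount, day
""").fetchall()

print(rows_frame)   # ROWS: the two tied rows get different running sums
print(range_frame)  # RANGE: the two tied rows share the same running sum
```

Note that with ROWS, which of the two tied rows gets the smaller running sum is unspecified, since ORDER BY amount alone does not fix their relative order.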
Types of SQL Window Functions
Window functions in SQL Server are divided into three main types: aggregate, ranking, and value functions. Let’s have a brief overview of each.
Aggregate Window Functions
AVG(): returns the average of the values in a group, ignoring null values.
MAX(): returns the maximum value in the expression.
MIN(): returns the minimum value in the expression.
SUM(): returns the sum of all the values, or only the DISTINCT values, in the expression.
COUNT(): returns the number of items found in a group.
STDEV(): returns the statistical standard deviation of all values in the specified expression.
STDEVP(): returns the statistical standard deviation for the population for all values in the specified expression.
VAR(): returns the statistical variance of all values in the specified expression; it may be followed by the OVER clause.
VARP(): returns the statistical variance for the population for all values in the specified expression.
Sample query:
SELECT name, salary,
SUM(salary) OVER (PARTITION BY dept) AS dept_total,
AVG(salary) OVER (PARTITION BY dept) AS dept_avg
FROM employees;
Ranking Window Functions
Used to assign rank or position within partitions.
ROW_NUMBER(): assigns a unique sequential integer to rows within a partition of a result set.
RANK(): assigns a unique rank to each row within a partition with gaps in the ranking sequence when there are ties.
DENSE_RANK(): assigns a unique rank to each row within a partition without gaps in the ranking sequence when there are ties.
PERCENT_RANK(): calculates the relative rank of a row within a group of rows.
NTILE(): distributes rows in an ordered partition into a specified number of approximately equal groups.
Sample query:
SELECT name, salary,
RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS dept_rank
FROM employees;
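The tie-handling differences between ROW_NUMBER(), RANK(), and DENSE_RANK() described above can be observed side by side. A sketch using Python’s sqlite3 (the scores table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE scores (player TEXT, points INTEGER);
    INSERT INTO scores VALUES ('a', 50), ('b', 40), ('c', 40), ('d', 30);
""")

# Players b and c are tied, so the three ranking functions diverge:
# ROW_NUMBER is always unique, RANK leaves a gap after ties, DENSE_RANK doesn't.
rows = conn.execute("""
    SELECT player, points,
           ROW_NUMBER() OVER (ORDER BY points DESC) AS rn,
           RANK()       OVER (ORDER BY points DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY points DESC) AS drnk
    FROM scores ORDER BY points DESC, player
""").fetchall()
for r in rows:
    print(r)
```

The tied players both get RANK 2 and DENSE_RANK 2, while the next player gets RANK 4 (a gap) but DENSE_RANK 3 (no gap); ROW_NUMBER stays strictly sequential.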
Offset (Value) Window Functions
Used to access data from other rows.
LAG(): retrieves values from rows that precede the current row in the result set.
LEAD(): retrieves values from rows that follow the current row in the result set.
FIRST_VALUE(): returns the first value in an ordered set of values within a partition.
LAST_VALUE(): returns the last value in an ordered set of values within a partition.
NTH_VALUE(): returns the value of the nth row in the ordered set of values.
CUME_DIST(): returns the cumulative distribution of a value in a group of values.
Sample Query:
SELECT date, revenue,
LAG(revenue, 1) OVER (ORDER BY date) AS prev_month,
revenue - LAG(revenue, 1) OVER (ORDER BY date) AS change
FROM monthly_sales;
Summary
SQL window functions provide a powerful analytical layer within standard SQL, enabling complex calculations across related rows while preserving row-level granularity. Unlike GROUP BY, they do not collapse result sets, which makes them ideal for scenarios requiring both detail and aggregate insight in the same query.
The OVER() clause is central, with PARTITION BY defining logical groups, ORDER BY controlling calculation sequence, and optional frame specifications (ROWS or RANGE) refining scope.
Key functional categories include aggregate window functions for running totals and moving averages, ranking functions such as ROW_NUMBER() and RANK() for ordered comparisons, and offset functions like LAG() and LEAD() for time-series or sequential analysis.
When used correctly, window functions significantly reduce query complexity, eliminate the need for self-joins in many analytical patterns and improve expressiveness in reporting and business intelligence workloads.
In the previous chapters, we learned how to have proper customer conversations — avoiding compliments, digging into specifics, and not pitching too early. But here’s a question that kept bugging me: How do I know if a meeting actually went well?
Chapter 5 answers exactly that. And the answer is brutally simple: a meeting went well only if it ends with a commitment.
Outline
In this post, I’ll break down Chapter 5 into the following sections:
There’s No Such Thing as a Meeting That “Went Well” — Why every meeting either succeeds or fails, and how compliments trick you into thinking you’re making progress.
Commitment and Advancement: Two Sides of the Same Coin — The two key concepts of the chapter and why they always come together.
The Currencies of Commitment — The three types of commitment (Time, Reputation, Money) and how they escalate in seriousness.
The Spectrum: From Zombie Lead to Committed Customer — How to read the signals and know exactly where you stand with a potential customer.
Why We Don’t Ask for Commitments (And Why We Should) — The two traps that prevent us from getting real signals: fishing for compliments and not asking for next steps.
The “Crazy” First Customers: Your Early Evangelists — Why your first customers won’t be “normal” buyers, and why that’s a feature, not a bug.
How to Push for Commitment Without Being a Used Car Salesman — A practical framework for asking for commitments without feeling pushy.
Don’t Ask for Commitment Too Early — Why timing matters and how to match your ask to the stage of the relationship.
There’s No Such Thing as a Meeting That “Went Well”
This was a mindset shift for me. I used to walk out of meetings thinking “That went great! They loved the idea!” — and then… nothing happened. No follow-up, no next steps, just silence.
Fitzpatrick puts it bluntly:
Every meeting either succeeds or fails.
A meeting fails when you leave with:
A compliment: “That’s a really cool idea!”
A stalling tactic: “Let’s circle back after the holidays.”
A meeting succeeds when you leave with:
A commitment to the next step
Something concrete that advances the relationship
The tricky part? The subtle stalls don’t feel like rejection. “We should definitely talk again soon” sounds positive, but it’s just a polished version of “Don’t call me, I’ll call you.”
Rule of Thumb: If you leave a meeting feeling good but without a concrete next step, you probably got played by a compliment, not a commitment.
Commitment and Advancement: Two Sides of the Same Coin
Fitzpatrick introduces two key concepts:
Commitment — When someone gives you something they value. This proves they’re serious and not just being polite.
Advancement — When the relationship moves to the next concrete step in your sales or learning process.
These two almost always come together. To advance to the next step, someone has to commit something. And if someone commits something, the process naturally advances.
For example: You want to demo your product to a company’s decision-maker. To get that meeting (advancement), your current contact needs to introduce you to their boss (reputation commitment). One doesn’t happen without the other.
Rule of Thumb: Commitment and advancement are functionally the same thing. If you’re getting one, you’re usually getting both. If you’re getting neither, the meeting failed.
The Currencies of Commitment
Not all commitments are created equal. Fitzpatrick breaks them down into three “currencies” — and they escalate in seriousness:
1. Time Commitment
This is the lightest form. The person is investing their time to engage with you further.
Examples:
Agreeing to a follow-up meeting with clear next steps
Sitting down for a longer, deeper conversation
Trying out your prototype or beta and giving feedback
Coming to your office (or going out of their way) for a meeting
If someone won’t even give you another 30 minutes of their time, that’s a pretty clear signal.
2. Reputation Commitment
This is heavier. The person is putting their name and credibility on the line for you.
Examples:
Introducing you to their boss or a decision-maker
Introducing you to a peer or potential customer
Giving you a public testimonial or case study
Posting about you on social media or their company Slack
When someone introduces you to their boss, they’re essentially saying “I believe in this enough to risk looking stupid if it doesn’t work out.” That’s real skin in the game.
3. Financial Commitment
The ultimate signal. Money talks, everything else walks.
Examples:
A letter of intent (LOI) or pre-order
A deposit or partial payment
Pre-paying for the product before it’s built
If someone says “I’d definitely pay for that” — that means nothing. If someone says “Here’s $500, let me know when it’s ready” — that means everything.
Rule of Thumb: The more someone gives you (time → reputation → money), the more seriously you can take their signal. Compliments cost nothing. Commitments cost something. That’s the whole difference.
The Spectrum: From Zombie Lead to Committed Customer
Fitzpatrick describes a spectrum of signals you might get from potential customers, and it’s incredibly useful for figuring out where you actually stand:
Cold signals (the meeting failed):
“That’s cool, I like it” → compliment, worthless
“Looks interesting, keep me in the loop” → polite brush-off
“Let’s grab coffee sometime” → stalling, no specifics
No follow-up after the meeting → they forgot you exist
Warm signals (getting somewhere):
“Can you show this to my team next Tuesday?” → time + reputation commitment
“Send me the beta link, I’ll try it this week” → time commitment with a deadline
“Let me introduce you to our Head of Product” → reputation commitment
Hot signals (you’re onto something):
“How much would this cost? Can we do a pilot?” → moving toward financial commitment
“We’d like to pre-order 50 licenses” → money on the table
“Here’s a deposit, build it” → they’re all in
Rule of Thumb: If you can’t tell where someone falls on this spectrum, you didn’t push hard enough for a commitment at the end of the meeting.
Why We Don’t Ask for Commitments (And Why We Should)
So if commitments are so important, why don’t we ask for them? Fitzpatrick identifies two main traps:
Trap 1: You’re Fishing for Compliments
Instead of asking “Would you be willing to pay for this?” or “Can I show this to your boss?”, we ask soft questions like:
“What do you think of the idea?”
“Would you use something like this?”
These questions are begging for a compliment, not a commitment. And guess what? People are happy to give you a compliment because it costs them nothing and gets you out of the room.
Trap 2: You’re Not Asking for Next Steps
The meeting is going well. You’re vibing. You’re having a great conversation. And then… you just let it end. No ask. No push. You walk away with warm feelings and zero concrete progress.
This is fear dressed up as politeness. We don’t want to be “pushy” so we don’t ask. But here’s the thing — if your product is genuinely solving their problem, asking for a next step isn’t pushy. It’s helpful.
Rule of Thumb: Always know what commitment you want before the meeting starts. Then ask for it before the meeting ends. If you don’t ask, you won’t get it. Period.
The “Crazy” First Customers: Your Early Evangelists
Fitzpatrick makes an important point about who your first customers will be. They won’t be normal, rational, cautious buyers. Your first customers will be a little bit “crazy” — and that’s a good thing.
Your early evangelists typically:
Have the problem right now, not “someday”
Know they have the problem — they’re not in denial
Have already tried to solve it (maybe with spreadsheets, duct tape, or a competitor)
Have the budget or authority to actually pay for a solution
Are desperate enough to try an unfinished, unpolished product from an unknown startup
Think about it: a normal person wouldn’t use a half-built product from two people in a garage. But someone who’s in pain RIGHT NOW and has been looking for a solution? They’ll tolerate bugs, missing features, and a terrible UI — because you’re solving their burning problem.
These people are gold. They give you real feedback, real money, and real validation.
Rule of Thumb: If you can’t find anyone who’s desperate enough to use your product in its current state, you either haven’t found your real customer segment, or you’re not solving a painful enough problem.
How to Push for Commitment Without Being a Used Car Salesman
A common fear: “But I don’t want to be pushy!”
Fitzpatrick’s answer: you’re not being pushy if you’re genuinely trying to help. Here’s his framework:
Know your ask before the meeting. What’s the ideal next step? An intro to the boss? A pilot program? A pre-order? Know this going in.
Ask at the end of the meeting. Don’t let the meeting fizzle out. Before wrapping up, clearly state what you’d like to happen next.
Accept the answer gracefully. If they say no, that’s actually great information. A clear “no” is infinitely more useful than a wishy-washy “maybe.” At least now you know where you stand.
Interpret the response honestly. If they dodge, stall, or give you a compliment instead of a commitment — recognize it for what it is. Don’t lie to yourself.
Examples of good asks:
“Would you be willing to do a trial run with your team next month?”
“Could you introduce me to [decision-maker] so I can understand their perspective?”
“If we build this by March, would you commit to being a pilot customer?”
“Can I get a letter of intent so we can prioritize building this feature for you?”
Rule of Thumb: If you’re afraid to ask for a commitment because you think the person will say no — that’s exactly why you need to ask. A “no” now saves you months of chasing a dead lead.
Don’t Ask for Commitment Too Early
Here’s the balance: pushing for commitment is essential, but timing matters.
If you push for money or a huge commitment during what’s supposed to be an early learning conversation, you’ll scare people away. The first few conversations should be about learning — understanding their problem, their workflow, their pain.
Once you’ve validated the problem and have something to show (even a rough prototype), THEN you start pushing for commitments.
The progression looks like this:
Early conversations: Learn about the problem. No pitch, no ask. Just listen.
Problem validated: Start showing your solution concept. Ask for time commitments (follow-up meetings, beta testing).
Solution takes shape: Push for reputation commitments (introductions, referrals).
Product is tangible: Push for financial commitments (pre-orders, deposits, LOIs).
Skipping steps or pushing too hard too early is just as bad as never pushing at all.
Rule of Thumb: Match your ask to the stage of the relationship. Early = learn. Middle = time and reputation. Late = money.
Key Takeaways from Chapter 5
Let me sum up the core lessons:
Meetings don’t “go well.” They either produce a commitment or they fail. Stop fooling yourself with compliments.
Commitments come in three currencies: Time, Reputation, and Money — in escalating order of seriousness.
Always push for a next step. Know your ask before the meeting and make it before the meeting ends.
Compliments ≠ Commitments. “That’s a great idea” is worthless. “Here’s my credit card” is priceless.
Your first customers will be “crazy.” They have the problem now, they know it, and they’re desperate enough to use your unfinished product.
A “no” is better than a “maybe.” Rejection gives you clarity. Wishy-washiness wastes your time.
Match your ask to the stage. Don’t ask for money when you should be asking questions. Don’t ask for opinions when you should be asking for money.
This is part of my series where I break down each chapter of The Mom Test by Rob Fitzpatrick. If you’re building a product and talking to customers, this book is essential reading.
Previously: Chapter 4 – Why You Should Keep Customer Conversations Casual
Next up: Chapter 6 – Finding Conversations
“I want to deploy to AWS, but writing CloudFormation YAML is a pain…” “Azure has too many configuration options…” Sound familiar?
I had the same frustrations until I tried WinClaw's cloud auto-deploy skills. Just by having a conversation with AI, I got my app deployed to AWS, Azure, and Alibaba Cloud — fully automated.
What is WinClaw Cloud Deploy?
WinClaw is a free, open-source AI development tool with three cloud deployment skills:
Phase 3D — Report generation (access URL, cost estimate)
Same Experience on Azure & Alibaba Cloud
On Azure, it generates ARM templates and selects from App Service / VM / AKS / Functions.
On Alibaba Cloud, it generates ROS templates and selects from ECS / FC / ACK, with support for China's MLPS 2.0 compliance.
Getting Started
WinClaw is completely free and open-source:
GitHub Repository
SourceForge Downloads
Use GLM-5 (free) as the LLM backend:
set ANTHROPIC_BASE_URL=https://api.z.ai/api/anthropic
set ANTHROPIC_AUTH_TOKEN=your-api-key
set ANTHROPIC_MODEL=glm-5
Conclusion
WinClaw’s cloud deploy skills dramatically lower the barrier to infrastructure setup. Just answer questions about budget and traffic, get an optimal architecture proposal, and watch as everything deploys automatically. Once you try it, there’s no going back. Give it a shot!
When you’re new to Go, error handling is definitely a paradigm shift that you need to come to terms with. Unlike in other popular languages, in Go, errors are values, not exceptions. What this means for developers is that you can’t just hide from them – you have to handle errors explicitly and at the point of the call. That equals a lot of if err != nil { return err }. But more importantly for us now, since errors are values, they can also be passed around, inspected, and composed like any other variable. This opens the door to many security issues if you’re not careful.
This guide walks you through best practices for secure error handling in Go. We’ll look at the reasons why it’s so important, how it affects security, and how to securely create, wrap, propagate, contain, and log errors. We’ll also provide a checklist on how to handle specific Go errors securely.
Bear in mind that this is an article on the security aspect of error handling in Go, so it focuses on best practices and user-facing messages. If you’re looking for a primer on general error handling mechanics in Go, check out our exhaustive How to Handle Errors in Go tutorial.
Why does secure error handling matter in Go?
To be precise, secure error handling matters in all programming languages, but with Go, errors carry particular weight.
For one thing, Go services often run in highly security-sensitive and distributed environments. A lot of Go is used for writing APIs, cloud services, and microservices – types of infrastructure with significant potential for security breaches that carry severe consequences and, due to their distributed nature, can have a rippling effect.
For another, as already hinted in the introduction, the error-handling paradigm in Go makes developers somewhat vulnerable to disclosing sensitive information, such as paths, SQL queries, credentials, identifiers, or stack traces. Meanwhile, if you look at typical guides on error handling in Go, they seem to overlook the critical security aspect of containing and sanitizing your errors. Instead, they will teach you how to be specific and explicit, so that errors can be logged properly and debugged efficiently. But what happens if you expose these verbose errors to clients at runtime?
That’s how Go errors leak internal information
Errors in Go are values like any other, just with the error type. You decide what to do with that value, and so your program’s security depends entirely on how you create and expose errors.
If you fail to contain and sanitize them, you expose your app to a torrent of security issues, ranging from the disclosure of personally identifiable data to enumeration attacks. Take the recent example of CVE-2025-7445, a vulnerability in Kubernetes that allowed actors with access to the secrets-store-sync-controller logs to observe service account tokens in specific error-marshalling scenarios.
This shows that error handling in Go requires caution and sound design choices. But when done right, it pays off with improved API safety, clean logs, and better resistance to hacks.
Secure patterns for error creation and wrapping in Go
Now that we’ve covered why secure error handling is so important, let’s see how to design errors in Go without exposing sensitive information.
To have secure code, you need to treat errors as data objects that require sanitization. But to have practical code, you need enough information to debug it when problems arise. By adhering to the following three principles, you can achieve both.
Split brain (but a good one)
The most effective way to prevent accidental information leaks is to formalize the distinction between what the system sees and what the user sees. Relying on ad-hoc string manipulation at the level of HTTP handlers? I think you'll agree that approach is prone to human error. So, instead, you need to define a custom error type that enforces this separation at compile time.
It can look something like this: You create a struct that encapsulates both the Internal (unsafe) and the Public (safe) message.
package secure

import "fmt"

// SafeError implements the error interface but keeps secrets internal.
type SafeError struct {
// Machine-readable code for clients (e.g., "RESOURCE_NOT_FOUND")
Code string
// Human-readable message safe for public consumption
UserMsg string
// The raw, upstream error (DO NOT expose this via API)
Internal error
// Context map for structured logging (sanitized)
Metadata map[string]string
}
// Error satisfies the stdlib interface.
// CRITICAL: This returns the SAFE message, not the internal one.
// This prevents accidental leaks if the error is printed directly to an HTTP response.
func (e *SafeError) Error() string {
return e.UserMsg
}
// LogString returns the detailed string for your SRE team.
func (e *SafeError) LogString() string {
return fmt.Sprintf("Code: %s | Msg: %s | Cause: %v | Meta: %v",
e.Code, e.UserMsg, e.Internal, e.Metadata)
}
You can check out this Go error library by Cockroach Labs to see a real-life implementation of this principle and read an interesting article on how they approach logging and error redaction for additional inspiration.
Why is this more secure?
Let's say a developer accidentally passes the above error to http.Error(w, err.Error(), 500). The user will only see the sanitized UserMsg, while the sensitive SQL syntax error or upstream timeout token remains hidden inside the struct – accessible only through the LogString() method used by your logging middleware.
Contextual sanitization
Errors rarely happen in a vacuum, so you need context (variables, IDs, inputs) to debug. But blindly adding context is how sensitive data leaks into the logs.
This is what you don’t do:
// DANGEROUS: Logging raw input structures
if err != nil {
return fmt.Errorf("login failed for request %v: %w", authRequest, err)
}
// If authRequest contains a 'Password' field, you just wrote it to disk.
And this is what you do instead: use a builder pattern or helper function that explicitly allow-lists safe metadata fields.
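A minimal sketch of such a helper, assuming a hypothetical AuthRequest type (safeAuthContext, leaks, and the field names are illustrative, not a real API):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// AuthRequest is a hypothetical input struct; Password must never reach the logs.
type AuthRequest struct {
	Username string
	Password string
}

// safeAuthContext allow-lists the fields that may travel with an error.
// Anything not listed here (like Password) simply cannot leak.
func safeAuthContext(req AuthRequest) map[string]string {
	return map[string]string{
		"username": req.Username, // explicitly chosen as safe
	}
}

// login is a stand-in handler that attaches only allow-listed context.
func login(req AuthRequest) error {
	err := errors.New("invalid credentials") // stand-in for the real failure
	return fmt.Errorf("login failed (ctx=%v): %w", safeAuthContext(req), err)
}

// leaks reports whether a given secret made it into the error text.
func leaks(err error, secret string) bool {
	return err != nil && strings.Contains(err.Error(), secret)
}

func main() {
	err := login(AuthRequest{Username: "alice", Password: "s3cret"})
	fmt.Println(err)
	fmt.Println("password leaked:", leaks(err, "s3cret"))
}
```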
By using an explicit builder pattern or helper function, you force yourself to inspect everything and choose what gets logged rather than defaulting to “everything”.
Opaque wrapping
Standard wrapping using fmt.Errorf("... %w", err) creates a chain. While excellent for debugging, this allows errors.Is and errors.As (and, from Go 1.26, errors.AsType) to traverse down to the root cause. In high-security contexts, you may want to prevent the caller from introspecting the underlying library entirely.
For that, you wrap the error in a way that captures the stack trace and context, but breaks the dependency chain for the caller.
func GetUserProfile(id string) (*Profile, error) {
// Imagine this returns a specific database error containing table names
// e.g., "pq: relation 'users_v2' does not exist"
user, err := db.QueryUser(id)
if err != nil {
// BAD: returns raw DB error.
// return nil, err
// BAD: wraps, but exposes the underlying type via Unwrap().
// return nil, fmt.Errorf("db error: %w", err)
// GOOD: Opaque wrapping.
// We log the raw error here or wrap it in a type that doesn't
// expose the cause via Unwrap() to the external world.
return nil, &SafeError{
Code: "FETCH_ERROR",
UserMsg: "Unable to retrieve user profile.",
Internal: err, // Stored for logs, hidden from Unwrap logic if needed
}
}
return user, nil
}
Why is this more secure?
By explicitly controlling how your custom error type implements (or doesn’t implement) Unwrap(), you act as a firewall. You ensure that a vulnerability in a third-party XML parser or SQL driver cannot be introspected or triggered by a malicious user manipulating inputs to check for specific error types.
Safe error propagation
Go is one of the most popular choices for distributed systems, like microservices, cloud functions, and APIs. In an environment like that, an error is not just a local event – it usually bubbles up somewhere upstream.
One of the most dangerous security habits in Go is letting errors bubble up unfiltered – an error originating in the database layer is returned up the stack, function by function, until it's serialized directly to the user's screen. Then, instead of a simple File not found, unauthorized actors get access to your internal architecture – file paths, library versions, IP addresses, and schema details.
That's why, when working with distributed architectures, proper error containment is a top priority for security. Depending on which trust boundary the data crosses, we can distinguish three distinct levels of containment and the patterns to deal with them.
Crossing subsystem boundaries
Sanitize your data when it crosses subsystem boundaries, like when it moves from a data access layer (DAL) to a business logic layer (BLL). If your database fails, the BLL doesn’t need to know why it happened, only that it did. Wrap the raw error in a domain-specific one, for example:
Raw: pq: duplicate key value violates unique constraint "users_email_key"
Sanitized: domain.ErrDuplicateUser (wrapping the raw cause)
Otherwise, you’re risking leaking implementation details, such as revealing that you’re using PostgreSQL rather than MongoDB.
Crossing API boundaries
Translate your error in service-to-service communication, like billing calling your auth service. Convert Go error types into standardized protocol errors (gRPC status codes or standard JSON error responses). The upstream service only needs to know how to react, not which line of code broke.
Not translating errors can result in cascading failures and risks exposing stack traces to other services that don’t need to know the ins and outs of your code.
// BillingService → AuthService call
resp, err := s.auth.ValidateToken(ctx, token)
if err != nil {
var authErr *secure.SafeError
if errors.As(err, &authErr) {
// Translate domain error → protocol
return nil, &secure.SafeError{
Code: "AUTH_UNAVAILABLE",
UserMsg: "Authentication service is temporarily unavailable.",
Internal: err, // keep original cause for logs
Metadata: map[string]string{"svc": "auth"},
}
}
// Unknown error → generic translation
return nil, &secure.SafeError{
Code: "INTERNAL",
UserMsg: "Internal service error.",
Internal: err,
}
}
Crossing public boundaries
Wrap your errors in generic messages when crossing public boundaries, like from your public API gateway to the end user. They should never see a generated error message, only a static, pre-defined string or code (like Service temporarily unavailable. Request ID: abc-123, not Connection timeout to redis-cluster-01 at 10.0.1.5:6379). Otherwise, you risk giving attackers hints for SQL injection, path traversal, or denial of service (DoS) attacks.
// Handler serves the HTTP request
func (s *Server) HandleCreateOrder(w http.ResponseWriter, r *http.Request) {
// 1. Execute Logic
// Errors bubble up, containing stack traces and SQL details
err := s.orders.Create(r.Context(), reqBody)
if err != nil {
// 2. Log the "Truth"
// We log the FULL internal error for the security/dev team
s.logger.Error("failed to create order", "error", err, "stack", stack.Trace(err))
// 3. Contain and Translate for the User
// We never just write 'err.Error()' to the response writer.
translateAndRespond(w, err)
return
}
w.WriteHeader(http.StatusCreated)
}
func translateAndRespond(w http.ResponseWriter, err error) {
var status int
var publicMsg string
// We inspect the error type or sentinel value to decide the "Public Face" of the error
switch {
case errors.Is(err, domain.ErrInvalidInput):
status = http.StatusBadRequest
publicMsg = "The provided order details are invalid."
case errors.Is(err, domain.ErrConflict):
status = http.StatusConflict
publicMsg = "This order has already been processed."
case errors.Is(err, context.DeadlineExceeded):
status = http.StatusGatewayTimeout
publicMsg = "The request timed out."
default:
// CATCH-ALL: The most important security catch.
// If we don't recognize the error, we assume it's sensitive internal state.
status = http.StatusInternalServerError
publicMsg = "An internal error occurred. Please contact support."
}
http.Error(w, publicMsg, status)
}
Logging errors without leaking sensitive data
Even internal logs should be sanitized in anticipation of a possible leak. Shift from the mindset of "logging everything" to logging only the safe context that's needed. Here are some key rules for logging errors securely:
1. Use structured logging
Stop using fmt.Printf or string concatenation. Use a structured logger (like Go’s standard log/slog or libraries like zap and zerolog). Structured logging treats log parameters as typed data, not raw strings. This significantly reduces the risk of log injection attacks because the logger handles the escaping of special characters.
2. Sanitize before logging
Never log a struct directly unless you have verified it contains no personal data. Instead, use a pattern where you explicitly map only the fields required for debugging (see the Contextual sanitization section above).
3. Redact at middleware
For data that must be logged but contains sensitive parts (like a full HTTP request for debugging), add a redaction step in your logging middleware – for example, scrub known-sensitive headers before logging, or formalize this with a Redactor interface.
func LogRequest(r *http.Request) {
// Basic scrubbing of common sensitive headers
safeHeaders := r.Header.Clone()
safeHeaders.Del("Authorization")
safeHeaders.Del("Cookie")
slog.Info("incoming request",
slog.String("path", r.URL.Path),
slog.Any("headers", safeHeaders), // Safe to log now
)
}
4. Check everything
Security relies on consistency, but we humans are notoriously inconsistent. Use your IDE to catch insecure logging patterns before they ship. Some GoLand features that help with secure error handling are:
Printf validation: GoLand detects if the arguments passed to a formatting function don’t match the verbs, reducing the risk of accidental data leaks through malformed strings.
Taint analysis: Through data flow analysis, GoLand can track variables from untrusted sources (like HTTP bodies) and warn you if they are being used in dangerous sinks (like raw string concatenation in logs) without sanitization.
Time to check your codebase
If you feel like any of these golden rules are news to you, maybe it’s time to do a security audit of your codebase. To make it easier for you, here’s a checklist of some questions that you may ask yourself about how your application handles errors with best practices for different scenarios.
Security audit checklist
Is the caller external or untrusted? If yes → translate the error to a generic response. If no → propagate/wrap internally.
Does the error contain sensitive data? If yes → redact and sanitize before logging. If no → log normally (structured).
Did the error come from an upstream service or library? If yes → wrap and sanitize. If no → propagate internally.
Will the error cross a trust boundary (API/gateway)? If yes → replace it with a safe message. If no → keep the internal context.
Is the error caused by malformed or unsafe input? If yes → fail fast and stop processing. If no → validate and continue.
Is this a recoverable business error? If yes → return a safe user-facing message. If no → consider fail-fast behavior.
Is the system in an inconsistent or corrupted state? If yes → fail secure (panic and recover safely). If no → continue only if you're certain the system is not corrupted.
Does the error need to be logged? If yes → log a sanitized version. If no → avoid logging unnecessary details.
Will developers need internal details for debugging? If yes → store internal details in logs only. If no → keep the client response generic.
Is the error part of a recurring security pattern (auth/permission)? If yes → use standard codes/responses. If no → avoid inventing new response formats.
Frequently asked questions
Can I return err.Error() directly to API clients?
No. err.Error() is designed for debugging by developers. It can leak implementation and structure information to hackers.
What is the safest way to return errors in Go APIs?
You should return structured, sanitized protocol errors that provide just enough information for the client to react, while keeping technical details hidden.
How do I prevent Go errors from leaking sensitive information?
First and foremost, decouple system information from user-facing messaging and never provide raw errors to end users. Know when data crosses boundaries and only provide as much context as needed to resolve the issue. If sensitive data must be logged, redact it.
How can Go services safely log errors without exposing secrets?
Shift your mindset from “log everything” to “sanitize everything”. You should make sure that your logs are rich enough to debug issues, but sterile enough that the system and users won’t be compromised if leaked.
What is the difference between propagating and translating errors in Go?
When you propagate an error, you run it up the call stack (usually wrapped in context with %w). This preserves the details and stack trace for easier debugging.
Translating an error means catching and replacing it with a different, domain-specific error (like swapping an sql.ErrNoRows for a UserNotFound) to hide implementation details from the caller.
A good rule of thumb for security is propagating errors internally between subsystems and translating them at the API boundary to prevent leaks.
When should a Go application fail fast for security reasons?
An app should fail fast for security reasons when it detects conditions that compromise trust, integrity, or confidentiality. For example: on authentication failure, insecure input (like known SQL injection patterns), or resource exhaustion (an early sign of a DoS attack), fail fast but don't panic; on an integrity check failure or tampered configuration, panic.
How do you design secure user-facing error messages in Go?
Use a custom error type that holds both private error details and safe public messages. Only return public messages to the client. Make sure they are generic, opaque, and standardized. Never provide specific technical details and only provide safe context to the extent that it’s necessary for tracing.
How should upstream service or database errors be handled securely in Go?
Upstream service and database errors must be handled securely by containing and translating them at the service boundary to prevent information leakage.
Containment means that raw errors should not be propagated across service or API trust boundaries. Translation means that raw errors should be mapped to generic, domain-specific errors defined in the service.
What are common security mistakes in Go error handling?
Most security mistakes when it comes to error handling in Go boil down to over-exposure of internal details. Common mistakes include:
Propagation of raw errors across trust boundaries.
Accidentally logging secrets.
Exposing raw stack traces or verbose internal error messages to end users.
Relying on a generic handler that returns err.Error(), instead of custom error types.
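The last mistake can be made concrete with a small sketch (errDB and both helpers are hypothetical): the leaky version hands raw error text to the client, while the safe version returns a fixed message and leaves the detail to server-side logs.

```go
package main

import (
	"errors"
	"fmt"
)

var errDB = errors.New("dial tcp 10.0.0.7:5432: connect: connection refused")

// leakyMessage is the mistake: it returns the raw error text,
// exposing internal topology to end users.
func leakyMessage(err error) string { return err.Error() }

// safeMessage returns a fixed, generic message; the raw error should
// be logged separately with full detail.
func safeMessage(err error) string {
	return "internal error"
}

func main() {
	fmt.Println(leakyMessage(errDB)) // leaks host, port, and driver details
	fmt.Println(safeMessage(errDB))  // safe for end users
}
```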
How can GoLand help detect insecure error patterns?
GoLand can help you detect insecure error patterns primarily through static code analysis (inspections) and data flow analysis. Here are some key detection features you might be interested in:
Detection of unhandled errors: GoLand automatically flags functions that return an error but have been called without checking it. Proceeding with an operation when a check has failed (or didn’t occur at all) might result in an authentication bypass – the program serving sensitive data to an unauthenticated user.
Detection of nil pointer dereference and data flow analysis: GoLand tracks how nil values move across functions and files to warn you about potential nil dereferences. It also reports places where a variable might be nil or hold an unexpected value because an associated error was not checked. Unchecked nil variables can cause a panic that leaves the application in an inconsistent state or can be exploited in DoS attacks.
Resource leak inspection: GoLand analyzes your code locally to ensure that any object implementing io.Closer is properly closed. Resource leaks pose a security threat because, when exploited, they are a gateway for DoS attacks.
Package Checker: This plugin analyzes third-party dependencies for known vulnerabilities and helps you update them to patched versions. This protects you from known exploits and helps you remain compliant with regulatory requirements.
Type assertion on errors: GoLand reports type assertion or type switch on errors, for example, err.(*MyErr) or switch err.(type), and suggests using errors.As instead.
errors.AsType: After the introduction of errors.AsType in Go 1.26, GoLand reports usages of errors.As that can be replaced with this generic function that unwraps errors in a type-safe way and returns a typed result directly.
Lately I’ve decided to keep mental notes of everyday concepts I use in my work. Most times I have a good overview of what a concept does, but I never really understood what goes on under the hood. I mean, it works innit. But I’m taking a step further with some of those concepts, and today’s subject is all about Client Side Rendering (CSR) and Server Side Rendering (SSR).
Before SSR was a thing
We all know how wonderful Single Page Applications (SPAs) are, but to really appreciate SSR, we need to go back a little.
So basically, the traditional way of displaying content on the internet is a little relationship between the client (your browser) and the server (whatever web hosting platform your content lives on).
The browser asks the server for a page, the server returns an index.html with a <script> tag at the bottom pointing to a main.js file, the browser then makes another request to fetch that main.js and executes it.
This is where React takes over. It runs your components, converts JSX (JavaScript XML) into JavaScript objects, what we call the Virtual DOM, and then converts those JavaScript objects into actual DOM nodes the browser can display.
So your JSX that looks like this:

```jsx
<h1 className="title">Hello World</h1>
```

gets converted into a JavaScript object like this:

```js
{
  type: "h1",
  props: {
    className: "title",
    children: "Hello World"
  }
}
```

That’s your virtual DOM right there, just a plain JavaScript object describing what the UI should look like. React then takes that object and creates an actual DOM node from it:

```js
const el = document.createElement("h1");
el.className = "title";
el.textContent = "Hello World";
```

And appends it to the #root div. Boom, the browser displays it. That’s basically CSR.
State updates and reconciliation
Now a huge part of any UI is state: counter, cart items, form inputs, all of that. When state updates, React doesn’t re-download main.js and redo everything from scratch; that would be crazy expensive.
What React actually does is look at the component where the update happened, build a new virtual DOM, compare it to the previous one, find what changed, and update only those nodes in the actual DOM. The process is called reconciliation and it’s honestly wild how all of this happens without a full page refresh.
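That diffing step can be sketched very loosely (this is a toy, nothing like React's actual reconciliation algorithm) as a function that compares two virtual DOM objects and collects only what changed:

```javascript
// A toy diff over the plain-object virtual DOM from earlier.
function diff(oldNode, newNode) {
  const patches = [];
  // Different element type: the whole node must be replaced.
  if (oldNode.type !== newNode.type) {
    patches.push({ op: "replace", node: newNode });
    return patches;
  }
  // Same type: only patch the props that actually changed.
  const keys = new Set([
    ...Object.keys(oldNode.props),
    ...Object.keys(newNode.props),
  ]);
  for (const key of keys) {
    if (oldNode.props[key] !== newNode.props[key]) {
      patches.push({ op: "setProp", key, value: newNode.props[key] });
    }
  }
  return patches;
}

const prev = { type: "h1", props: { className: "title", children: "Count: 0" } };
const next = { type: "h1", props: { className: "title", children: "Count: 1" } };

console.log(diff(prev, next)); // one setProp patch for "children"; className untouched
```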
And that whole process I just described, the browser downloading JS, React building the virtual DOM, reconciliation and all — that’s Client Side Rendering.
Then SSR came along
SSR is so beautiful because it offloads a huge chunk of this work from the browser and just does it on the server instead.
Now “the server” can sound confusing, but here is the thing, JavaScript can run in two primary environments: the browser and Node.js. So in SSR, it’s Node.js doing all the heavy lifting.
Here’s what happens when you request a page:
The Node.js server runs React: it calls your component functions, builds the virtual DOM from your (already compiled) JSX, and converts everything into real HTML, then sends that fully structured HTML straight to the browser. The browser receives real HTML and can display it immediately, no waiting for JS to build everything from scratch.
```html
<div id="root">
  <h1>Hello</h1>
  <p>This came from the server</p>
</div>
```
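You can get a feel for that server step with a toy renderer (ignoring attributes and most edge cases, nothing like production react-dom/server): it just walks the virtual DOM object and emits an HTML string.

```javascript
// Toy server-side renderer: virtual DOM object in, HTML string out.
function renderToHtml(node) {
  if (typeof node === "string") return node; // text node
  const children = node.props.children ?? [];
  const kids = Array.isArray(children) ? children : [children];
  return `<${node.type}>${kids.map(renderToHtml).join("")}</${node.type}>`;
}

const vdom = {
  type: "div",
  props: {
    children: [
      { type: "h1", props: { children: "Hello" } },
      { type: "p", props: { children: "This came from the server" } },
    ],
  },
};

console.log(renderToHtml(vdom));
// → <div><h1>Hello</h1><p>This came from the server</p></div>
```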
But that’s not the end of it. In the background the JavaScript bundle also loads, and once it does, hydration happens.
Okay but what even is hydration?
Hydration is basically the process of attaching interactivity, specifically event listeners, to the HTML the server already sent. Think of it like the server sending you a fully built house, and hydration is the electrician coming in afterwards to wire everything up.
In CSR, React uses createRoot which builds everything from scratch. In SSR, React uses hydrateRoot which assumes the DOM already exists, all it does is recreate the virtual DOM and attach event handlers to the right elements.
```js
// CSR - builds from scratch
ReactDOM.createRoot(document.getElementById("root")).render(<App />);

// SSR - assumes HTML exists, just wires it up
ReactDOM.hydrateRoot(document.getElementById("root"), <App />);
```
So how does this affect how I write Next.js code?
Next.js does SSR by default. Every component runs on the server first, even ones marked with the "use client" directive. The difference is that "use client" tells the compiler that this component needs to be hydrated on the client so it can handle interactivity.
So the rule is simple, anywhere you need state, effects, browser APIs like document.querySelector or even event listeners, add "use client" at the top of the file. If you don’t, your code will be trying to access browser APIs that don’t exist on the server and you will be staring at errors wondering what went wrong.
Honestly this is the part that changed how I think at work. Once you understand CSR and SSR, you stop throwing use client everywhere and start asking, does this component actually need to run on the client?
The mistake I made early on was throwing "use client" at the top of every component. It just felt like the thing to do coming from plain React. But once I understood what was actually happening under the hood, I realized I was basically opting out of SSR everywhere and giving up performance benefits I could have had for free.
A good rule of thumb: if a component just displays data with no clicks, no state, no user interaction, leave it as a server component. Only reach for "use client" when the component actually needs to respond to the user. That way the browser only handles what it truly needs to.
Wrapping up
It’s honestly crazy how much is happening under the hood every time a page loads. Understanding this whole flow — CSR, virtual DOM, reconciliation, SSR, hydration — really changed how I think about building components and where I put my logic.
Let me know what part clicked for you or if there’s anything you’d push back on — I’d really love to hear it.
See you next time. Byeeee