In our previous post, The High Cost of Cutting Corners in ROS 2 Testing, we discussed a pattern we see frequently in real ROS 2 projects: testing is often deprioritized until failures appear, usually late in development, when fixing them is expensive and disruptive.
While tests are a critical part of the solution, many of these issues originate earlier. They are often rooted in how code is structured, reviewed, and validated from the start.

This post focuses on the foundations that make testing effective in ROS 2 projects, starting with a shared quality baseline and continuing with design decisions that directly impact testability.
Static analysis: the quality baseline
Before thinking about unit tests, integration tests, or testable design, there is a more fundamental requirement: a consistent and predictable codebase.
Static analysis tools provide this baseline. They don’t validate behavior, but they ensure that all developers follow the same rules and conventions, reducing noise and friction long before tests are involved, and making the code easier to read and maintain.
In ROS 2 projects, static analysis usually falls into three complementary categories.
- Formatters are responsible for code layout and style. They enforce consistent indentation, spacing, brace placement, and line length, among other things. Tools like clang-format ensure that formatting decisions are automated instead of debated in reviews. This is especially important in collaborative projects, where it removes friction and keeps development moving.
- Linters focus on style and common mistakes. They catch issues such as naming inconsistencies, missing includes, or suspicious constructs that are technically valid but error-prone. In ROS 2, tools like cpplint and XML linters for package.xml files help enforce conventions across both code and configuration.
- Static analyzers go a step further by inspecting code for potential bugs and risky patterns without executing it. Tools such as cppcheck or clang-tidy can detect issues like uninitialized variables, incorrect memory usage, or problematic control flow that may not be caught by the compiler. Because they are more computationally expensive, they are often run on a scheduled CI job (weekly, for instance) rather than on every commit.
Together, these tools help teams catch a wide range of issues early and consistently. In ROS 2 projects, this is made easier by ament_lint_auto, which provides a standardized way to enable formatters, linters, and static analyzers using already configured defaults. This lowers the barrier to adoption and encourages teams to rely on shared conventions instead of custom, ad-hoc setups.
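In an ament_cmake package, enabling this is typically a few lines in CMakeLists.txt. A minimal sketch, assuming the relevant lint packages are declared as test dependencies in package.xml:

```cmake
# In CMakeLists.txt: run every linter declared as a test_depend
# entry in package.xml whenever tests are built.
if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  ament_lint_auto_find_test_dependencies()
endif()
```

With this in place, `colcon test` runs the configured formatters, linters, and analyzers alongside the package's regular tests.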
By integrating these checks early—both locally and in CI—teams remove subjective decisions from code reviews and prevent style and low-level issues from leaking into higher-level discussions. Static analysis doesn’t validate correctness, but by enforcing consistency and catching common problems, it creates a solid foundation on which testable and maintainable designs can be built.
An effective complement to this setup is the use of pre-commit hooks. Tools like pre-commit allow teams to run formatters and linters automatically before code is committed. This helps catch formatting and style issues early, keeps commits clean, and reduces unnecessary feedback during code reviews.
Pre-commit hooks are not a replacement for CI, but a lightweight first line of defense that reinforces consistency in day-to-day development.
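As an illustration, a minimal .pre-commit-config.yaml could wire clang-format into the commit workflow. The repository and `rev` below are examples, not a recommendation; pin whatever versions your team has agreed on:

```yaml
# Hypothetical minimal pre-commit configuration running clang-format
# on staged C++ files before each commit.
repos:
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v18.1.8  # example pin; use your team's version
    hooks:
      - id: clang-format
```

Running `pre-commit install` once per clone activates the hook for every subsequent commit.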
Why testing still feels hard in many ROS 2 projects
Even with good tooling in place, testing in ROS 2 can feel heavy. Tests may require launching nodes, spinning executors, and managing timing just to validate relatively simple behavior.
This friction is usually not caused by ROS itself. It is caused by tight coupling between logic and middleware. When computation, communication, configuration, and state live in the same place, tests inherit that complexity.
At this point, the challenge is not deciding when to test, but whether the code can be tested at all without excessive effort.
The monolithic node pattern
A common starting point in ROS 2 is a node that owns everything: parameters, publishers, subscribers, internal state, and all the logic. Callbacks receive messages, perform computations, log information, and publish results.
This pattern is understandable and often works in early stages. Over time, however, it leads to code that is hard to validate in isolation. Tests become slow and fragile, and refactoring requires running large parts of the system just to gain confidence.
The problem is no longer a lack of tests — it’s a lack of testable structure.
Separate logic from middleware
One design decision has a major impact on testability:
Core logic should be independent of ROS.
ROS 2 is middleware. Its role is communication, configuration, and orchestration. Algorithms and decision-making logic should not depend on node APIs, publishers, or subscribers.
When logic is extracted into ROS-agnostic classes, it can be tested deterministically, without launching ROS or managing executors. The ROS node becomes a thin wrapper that translates between messages and the underlying logic.
For example, embedding filtering logic directly in a callback couples computation with communication:
```cpp
void callback(const std_msgs::msg::Float64 & msg)
{
  double filtered = 0.9 * prev_ + 0.1 * msg.data;
  prev_ = filtered;
  RCLCPP_INFO(this->get_logger(), "Filtered value: %f", filtered);
  std_msgs::msg::Float64 out;
  out.data = filtered;
  publisher_->publish(out);
}
```
Extracting the logic clarifies responsibilities:
```cpp
class LowPassFilter
{
public:
  double update(double value)
  {
    prev_ = 0.9 * prev_ + 0.1 * value;
    return prev_;
  }

private:
  double prev_{0.0};
};
```
The filter can now be tested independently of ROS, while the node focuses exclusively on interacting with the ROS graph.
Applying SOLID principles in ROS 2
Separating logic from middleware is not an isolated trick — it is a direct consequence of applying well-known software design principles. One useful mental model for this is SOLID, a set of five principles that guide the design of maintainable and extensible software systems.
SOLID is an acronym, and each letter represents a different principle. While all of them are valuable in general-purpose software, not all of them have the same impact in ROS 2 node design. In practice, two principles stand out as especially relevant when building testable ROS 2 systems: Single Responsibility and Dependency Inversion.
The Single Responsibility Principle encourages ROS nodes to focus on communication and lifecycle concerns, while core logic lives elsewhere. This keeps components small and easier to reason about.
The Dependency Inversion Principle prevents algorithms from depending directly on ROS-specific details. Logic that does not know about publishers, subscribers, or parameters is easier to test and reuse.
Following these principles enables the separation discussed earlier, but their benefits go beyond that. They make the codebase easier to extend, reduce the impact of future changes, and allow testing strategies to grow naturally as the system evolves.
From “working” to “testable”
Improving testability does not require rewriting existing nodes. Small, incremental changes—extracting logic from callbacks, simplifying node responsibilities, and keeping ROS-specific code at the boundaries—quickly add up.
Each step reduces coupling, shortens feedback loops, and lowers the long-term cost of change.
What’s next
This post covered the groundwork that makes testing viable in ROS 2 projects: establishing a shared quality baseline through static analysis, and applying design principles that keep logic independent from the middleware.
In the ROS 2 Testing: A Practical Survival Guide workshop, we take these ideas one step further, applying them in concrete scenarios: unit testing ROS-agnostic logic, validating ROS interfaces, writing integration tests between nodes, and enforcing quality through continuous integration.
Effective testing in robotics is not about adopting a single methodology or tool. It’s about building code with clear boundaries, consistent structure, and design choices that make verification a natural part of development—not an afterthought.
👉 Ready to get started?
🔗 Explore the workshop materials and follow them at your own pace.
📩 Contact Ekumen if you want support designing or implementing a testing strategy for your product or team.