
Cyclomatic Complexity Explained: Why It Matters and How to Reduce It

Cyclomatic complexity explained: what it measures, why it predicts technical debt and incidents, and how to reduce it without rewriting from scratch.

[Figure: code flow diagram showing a cyclomatic complexity calculation with decision points]


What is cyclomatic complexity?

Cyclomatic complexity is a software metric introduced by Thomas J. McCabe in 1976. It measures the number of independent paths through a piece of code. In practical terms, it counts the number of decision points: if statements, else branches, case clauses, while and for loops, catch blocks and ternary operators. Each one adds one to the complexity score.

A function with no decision points has a cyclomatic complexity of 1. A function with a single if statement has a complexity of 2. A function with ten if statements and a loop has a complexity of 12, and more if the conditions contain boolean operators such as and/or, which most tools also count as decision points.
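As a sketch of how the counting works (the function and field names below are invented for illustration), each decision point adds one to the base score of 1, giving a complexity of 5:

```python
def classify_order(order):
    if order is None:              # +1 -> 2
        return "invalid"
    total = 0
    for item in order:             # +1 -> 3
        if item["qty"] > 0:        # +1 -> 4
            total += item["qty"] * item["price"]
    # Ternary operator is a decision point too:
    return "large" if total > 100 else "small"   # +1 -> 5
```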

The metric is important because code complexity is not just an abstract quality score. It is a direct predictor of two engineering outcomes: the number of test cases required to achieve full branch coverage, and the probability that the function will contain defects. A function with cyclomatic complexity 15 has 15 independent paths, so full branch coverage requires up to 15 test cases, and exercising every independent path requires all 15. A function with complexity 30 is statistically three times more likely to contain bugs than a function with complexity 10.

For engineering leaders, cyclomatic complexity is one of the most useful code health metrics because it is objective, measurable and directly correlated with maintenance cost and incident risk.

Why cyclomatic complexity predicts problems

Code complexity predicts problems for three structural reasons.

Testability. Each independent path through a function needs to be tested. A function with complexity 20 needs at least 20 test cases to be fully covered. In practice, teams rarely achieve full coverage of complex functions, which means branches of logic go untested and produce unexpected behavior in production. The correlation between high complexity and low test coverage is strong and consistent.

Changeability. When a function has many decision points, a change to one branch can affect other branches in non-obvious ways. The developer making the change must hold the entire decision structure in mind to reason safely about the effect of the modification. This cognitive load increases with complexity and creates the “fear of touching it” dynamic that most engineers recognize from experience.

Defect density. Research consistently shows that functions with higher cyclomatic complexity contain more bugs per line of code. The relationship is not perfectly linear, but it is reliable: the most complex 5% of a codebase typically generates 25% to 40% of all production defects. This is the hotspot dynamic described in our tech debt solution assessments.

How to measure it in your codebase

Cyclomatic complexity is calculated automatically by most static analysis tools. SonarQube reports it at the function, class and module level and tracks trends over time. CodeClimate and Codacy provide similar functionality. For languages with dedicated tooling, such as Python’s radon, JavaScript’s ESLint with the complexity rule, or Java’s PMD, the calculation is available from the command line or as part of a CI check.
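For a rough idea of what these tools compute, here is a minimal sketch using Python's standard ast module. It counts the decision-point node types described earlier and will not match a production tool exactly (real tools also handle comprehensions, match statements and language-specific constructs):

```python
import ast

# Node types that each add one to the score: if, loops,
# exception handlers and ternary expressions.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    score = 1
    for node in ast.walk(tree):
        if isinstance(node, DECISION_NODES):
            score += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes two extra branches.
            score += len(node.values) - 1
    return score
```

For real projects, a dedicated tool such as radon is the better choice; this sketch only illustrates the mechanics.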

The most useful output is not an aggregate number but a ranked list: the ten functions with the highest cyclomatic complexity in the codebase, sorted by a combination of complexity score and change frequency. Functions that are both complex and frequently modified are the highest-risk targets for remediation.

Integrating the complexity check into the CI/CD pipeline creates a quality gate. A build that increases the maximum function complexity above a defined threshold (typically 15 for new code) fails automatically. This prevents new debt from being introduced at the function level without a deliberate decision.
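For JavaScript codebases, ESLint's built-in complexity rule is one way to express such a gate; the threshold of 15 here mirrors the new-code threshold above:

```json
{
  "rules": {
    "complexity": ["error", { "max": 15 }]
  }
}
```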

The software health score in most tools includes cyclomatic complexity as a weighted component, making it part of the top-level metric that can be reported to leadership.

Thresholds and benchmarks

The standard thresholds for cyclomatic complexity, originally proposed by McCabe and subsequently validated empirically, are:

- 1 to 10: simple, easy to understand and test. Acceptable for most code.
- 11 to 15: moderate complexity. Should be reviewed but not automatically refactored.
- 16 to 25: high complexity. High defect risk. Refactoring is recommended before the function is modified again.
- Above 25: very high complexity. Urgent remediation priority. Functions in this range are statistically near-certain to contain bugs and will generate disproportionate maintenance cost.

In practice, a well-maintained codebase has a median cyclomatic complexity of 3 to 5 and a maximum of 15. A codebase with significant technical debt will have multiple functions above 20 and a median above 8. One client codebase we assessed had the top function at complexity 74, a function that had accumulated business logic over seven years without refactoring.

How to reduce cyclomatic complexity

Reducing code complexity does not require rewriting the function from scratch. The most reliable approach is Extract Method refactoring, applied incrementally.

The process:

1. Identify the most complex function in the codebase.
2. Identify a logically cohesive block of statements within that function that could be named and extracted.
3. Write tests for the current behavior of the function before making any changes.
4. Extract the block into a new, named method.
5. Verify that the tests still pass.
6. Measure the complexity of both the original function and the extracted method.

Applied repeatedly, this process converts a single function with complexity 30 into five functions each with complexity 6, without changing the observable behavior. The original function becomes an orchestrator that delegates to the extracted methods.
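A minimal before-and-after sketch of Extract Method (the pricing rules here are invented for illustration):

```python
# Before: one function holds every decision (complexity 5).
def total_before(items, customer):
    total = 0
    for item in items:                       # +1
        if item["qty"] > 0:                  # +1
            total += item["qty"] * item["price"]
    if customer["loyal"]:                    # +1
        total *= 0.9
    if total > 500:                          # +1
        total -= 25
    return total

# After: the original function is an orchestrator (complexity 1);
# each extracted method carries a small share of the branching.
def subtotal(items):
    return sum(i["qty"] * i["price"] for i in items if i["qty"] > 0)

def apply_loyalty(total, customer):
    return total * 0.9 if customer["loyal"] else total

def apply_bulk_discount(total):
    return total - 25 if total > 500 else total

def total_after(items, customer):
    return apply_bulk_discount(apply_loyalty(subtotal(items), customer))
```

Both versions compute the same result; the decision points have simply moved into small, individually testable functions.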

The second technique is the replacement of nested conditionals with early returns (guard clauses). Instead of a deeply nested if-else structure, each precondition is checked at the start of the function, which returns early if the condition is not met. The raw decision count may not change much, but the nesting that makes complex functions hard to read, test and refactor further is eliminated.
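A small sketch of the transformation, with invented order fields:

```python
# Nested version: every branch is indented and the happy path is buried.
def ship_nested(order):
    if order is not None:
        if order["paid"]:
            if order["in_stock"]:
                return "shipped"
            else:
                return "backordered"
        else:
            return "awaiting payment"
    else:
        return "invalid"

# Guard-clause version: each precondition exits early,
# so the happy path reads top to bottom with no nesting.
def ship_guarded(order):
    if order is None:
        return "invalid"
    if not order["paid"]:
        return "awaiting payment"
    if not order["in_stock"]:
        return "backordered"
    return "shipped"
```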

The third technique is the introduction of polymorphism to replace complex switch or multi-branch if-else chains that dispatch behavior based on a type. Moving the conditional dispatch into a class hierarchy reduces the complexity of the original function to 1 while distributing the logic into small, testable classes.
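A sketch of this dispatch-to-polymorphism move, using an invented notification example:

```python
# Before: conditional dispatch on a type string; complexity grows
# with every new channel added to the chain.
def send_before(kind, message):
    if kind == "email":
        return f"EMAIL: {message}"
    elif kind == "sms":
        return f"SMS: {message}"
    elif kind == "push":
        return f"PUSH: {message}"
    raise ValueError(kind)

# After: each branch becomes a small class; the caller's complexity
# drops to 1, and a new channel is a new class, not a new branch.
class Channel:
    def send(self, message):
        raise NotImplementedError

class Email(Channel):
    def send(self, message):
        return f"EMAIL: {message}"

class Sms(Channel):
    def send(self, message):
        return f"SMS: {message}"

class Push(Channel):
    def send(self, message):
        return f"PUSH: {message}"

def send_after(channel: Channel, message: str) -> str:
    return channel.send(message)
```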

Conclusion

Cyclomatic complexity is one of the most reliable code quality metrics available to engineering teams. It is objective, automated and directly correlated with defect risk, testability and maintenance cost. A codebase with high average complexity will generate more incidents, take longer to change and cost more to maintain than one with low complexity, all other things being equal.

Reducing complexity is achievable incrementally without stopping feature development. The key is targeting the highest-complexity functions in the highest-risk areas of the codebase, applying small extract-and-test cycles, and integrating complexity thresholds into the CI pipeline to prevent new accumulation.

Eden Technologies measures cyclomatic complexity as a core component of every codebase assessment. It is one of the most consistent predictors of where clients will see delivery problems and incident risk.

Does your codebase have these problems? Let’s talk about your system