Forgivable vulnerabilities: A kinder, gentler approach to cybersecurity?
The NCSC argues that some bugs deserve clemency, whilst there's simply no excuse for leaving others unmitigated.

The UK's National Cyber Security Centre (NCSC) recently set out a new, simplified classification system for assessing vulnerabilities.
Its approach focuses on the concept of "unforgivable" vulnerabilities, which Steve Christey introduced in 2007 in a paper for MITRE.
"They are beacons of a systematic disregard for secure development practices," Christey wrote. "They simply should not appear in software that has been designed, developed, and tested with security in mind."
The NCSC went further by declaring that some vulnerabilities are "forgivable" - a stance that may seem surprising in an era of record CVE volumes, with disclosed vulnerabilities forecast to reach as many as 50,000 this year.
So is it time that defenders took a more charitable attitude to assessing vulnerabilities?
What is a forgivable vulnerability?
The NCSC said that a bug deserves clemency when it is "subtle," expensive to mitigate, or poorly understood. This classification also applies if the technical implementation of the mitigation relies on "too many (or too complex) prerequisites."
An unforgivable vulnerability, by contrast, is one whose mitigation is cheap to implement, fully documented, and free of undue complexity.
But can any flaw really be described as forgivable? We asked Sylvain Cortes, VP Strategy at Hackuity, and Eugene Rojavski, Security Research Group Manager at Checkmarx, for their views.
Checkmarx's Rojavski pointed out that the paper “raises important discussions around prioritisation”, although he argued that the binary classification oversimplifies matters.
Rojavski said: “According to the NCSC’s methodology, ‘unforgivable’ vulnerabilities are those that could have been easily mitigated through common practices like input validation and output encoding. These two strategies, however, would apply to many vulnerabilities, meaning the developer isn’t much closer to an effective list of the most to least critical vulnerabilities.
"In the CWE Top 25 Most Dangerous Software Weaknesses referenced by the NCSC, for example, 14 out of 25 vulnerabilities can be categorised as ‘unforgivable’ using these techniques. Additionally, some mitigation measures, such as those under 'Libraries and Frameworks,' often rely on libraries that handle output encoding automatically, as seen in the case of XSS prevention.”
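Output encoding of the kind Rojavski describes is precisely why XSS sits on the "unforgivable" side of the ledger: the fix is cheap, well documented, and built into most standard libraries. As a minimal sketch, here is how Python's standard-library `html.escape` neutralises a script injection before untrusted input reaches a page (the `render_comment` helper is illustrative, not from the NCSC paper):

```python
import html

def render_comment(user_input: str) -> str:
    """Build an HTML fragment from untrusted input.

    html.escape converts <, >, &, and quote characters into HTML
    entities, so a <script> payload is rendered as inert text
    rather than executed by the browser.
    """
    return f"<p>{html.escape(user_input)}</p>"

# A classic XSS probe is defanged by the encoding step.
print(render_comment('<script>alert("xss")</script>'))
# → <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Frameworks such as the templating libraries Rojavski alludes to apply this encoding automatically, which is exactly why skipping it is hard to excuse.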
The paper also recommends that teams adopt more secure programming languages. Although such languages offer clear advantages over C++, notably memory safety, Rojavski warned of the practical challenges of shifting away from the industry's most entrenched languages.
“The NCSC's suggestions are good on paper," he added. "Although newer coding languages, like Rust, offer significant security benefits over other more commonly used language options such as C++, the reality is that implementation of these is far more complicated.
"Transitioning to Rust at scale would be extremely complex, as the majority of developers write in C++ - one of the most common languages used in software development across hardware, gaming, graphics-intensive applications, and other industries.
“An industry-wide transition would require time, investment and developer upskilling, and wouldn’t be an instant fix to combat the growing number of vulnerabilities actively being exploited by threat actors.”
Forgiving the unforgivable
Hackuity's Cortes welcomed the new classification system.
“The NCSC has hit the nail on the head here,” he said. “Unfortunately, many new products still don’t have security built in as vendors focus on pushing new products quickly to meet market demands.
"Security is still seen as an afterthought to software development, but it needs to be embedded into every stage of the development lifecycle.”
Both experts agreed that truly baking security in means implementing checks at every stage of the development lifecycle.
“A truly secure by design mindset requires concrete actions like implementing SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) to identify vulnerabilities both during and after code is developed,” Cortes said.
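The kind of flaw a SAST tool catches during development is often a textbook "unforgivable" bug. As an illustrative sketch (the function names are hypothetical), the snippet below contrasts a SQL query built by string concatenation, which static analysers routinely flag, with the parameterised form that closes the injection hole:

```python
import sqlite3

# In-memory database standing in for a real application backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # SAST tools flag this pattern: untrusted input concatenated
    # directly into a SQL statement.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver keeps data separate from SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# An injected predicate dumps rows via the unsafe path...
print(find_user_unsafe("x' OR '1'='1"))  # [('admin',)]
# ...but is treated as a literal, non-matching name when parameterised.
print(find_user_safe("x' OR '1'='1"))    # []
```

Catching the first pattern in a pre-merge SAST scan is far cheaper than remediating it after disclosure, which is the core of Cortes's argument.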
He pointed out that modern software rarely consists of 100% original code: developers often turn to open-source components to form the backbone of their products. Whilst this can slash hundreds of hours from the development process, there are clear risks.
“Third-party open-source code has almost never been vetted,” he said. “This allows anyone to insert malicious backdoors into the code that can be exploited at a later date. Take the Log4j incident, for example.
"Large numbers of software vendors used this code as a baseline for their product, only to be affected by a mass global exploitation of the code later down the line. Vendors should rigorously vet third-party code if its use cannot be avoided but, where possible, develop their own code."
But will leaders, voters, shareholders, or anyone else impacted by a vulnerability's aftereffects ever simply excuse the defenders who failed to patch it? That remains to be seen. Whilst the NCSC's classification might have its heart in the right place, it's far from clear whether vulnerabilities will ever truly be regarded as forgivable.
Have you got a story or insights to share? Get in touch and let us know.