Security Theater: When is a “Critical” Really a Critical?

September 30, 2021 | By: Alex Hewko

What is Security Theater?

If you’re in the security industry, or you’ve dealt with security tools in recent years, you may have heard the term “security theater.” This term, first coined by Bruce Schneier, describes security measures that make us feel like we’re doing more for our security than we actually are. Bruce’s inspiration came from the United States’ Transportation Security Administration (TSA). As described in his essay, what appeared to be a complete network of security measures in airports still did not prevent a terrorist attack from occurring.

While Bruce’s example looks at the extreme end of risk that comes from security measures that aren’t all that secure, companies should also consider how security theater may affect their own applications or infrastructure. Understandably, this can be a big stressor for CTOs and CISOs, who ‘don’t know what they don’t know.’

The aim of this article is to identify areas where security theater often occurs and prepare the reader to critically examine their own security tools and processes for possible security theater.

Impact of Security Theater

Knowing why we need to look for security theater in an application is a critical step towards prioritizing and accurately implementing security in an organization. Here are three key ways in which security theater can impact an application’s security:

  • Lack of prioritization of application security leads to development teams wasting time on work that won’t move the needle sufficiently. In other words, it takes more time to come back later to patch and remediate issues than it would to correctly implement security solutions earlier in the process.
  • Lack of trust with application security procedures can leave the development team feeling unsure whether to believe in the results or not. As we’ll discuss later in the article, wrongly identifying a severity level can lead to team confusion and ‘boy who cried wolf’ effect.
  • Lack of alignment between security and business goals will leave security gaps. The goal of security is to provide “guard rails” rather than “speed bumps”: security teams should guide the application in the right direction, not slow it down. Unfortunately, many organizations have trouble communicating the importance of security to upper management, and the security of their applications suffers as a result.

Internal Security Theater

Many times, security theater can come from within. Examples include:

  • Overstating the risk. Without calibration against an industry standard, reporters are free to interpret risk more broadly, which leaves room for differences of opinion.
  • Calculating the risk using the worst case scenario. While this might be needed in business logic related scenarios, this can’t be applied to every single vulnerability.
  • Fixing vulnerabilities without real business scenarios or evidence. Not every vulnerability needs evidence to justify a fix, but in most cases, understanding the impact and probability (through evidence) helps make the case for spending time on remediation.
  • Pretending that old practices still work. As you build out your systems, you need to consider the changing impact on security and functionality. Like the image below, you can’t fall back on “but, we’ve always done it that way!” and still expect to improve your application’s security.
[Image: “Always Done It That Way”]

Security Theater in PenTesting

Something we’ve often heard from clients is that they’ve received inaccurate results from previous penetration testing providers. In these reports, the results themselves were not false; the issues identified usually did exist. The inaccuracy lies in mis-assigning a severity level to a vulnerability.

Some firms believe it’s good business practice to always report critical issues, to show that their service is effective at minimizing risk. However, mis-assigning issues is a huge red flag, and can leave your application or infrastructure more vulnerable or inefficient than before the pentest.

As another case scenario, a disreputable pentesting firm may not spend enough time understanding the application’s unique business logic to correctly identify the exact risk of the attack surface.

What’s The Risk in Wrongly Assigning Severity Levels?

Assigning It Higher Than It Actually Is

  • Wastes developer time by halting code shipping to fix the vulnerability. While it is more efficient to fix all vulnerabilities before launch, not all vulnerabilities need to halt code deployment.
  • The “boy who cried wolf” effect. If we call everything critical even when it’s not, how will we know what’s really leaving a security gap in our application?

Assigning It Lower Than It Actually Is

  • Risk of a security gap. Leaving your application vulnerable for longer than necessary leaves you exposed for more time.
  • Creation of a false sense of security. Understating the impact of a vulnerability could leave an issue unresolved for too long and/or lead to compliance issues.

How To Identify Correctly Assigned Severity Levels

DREAD scoring was introduced by Microsoft in the early 2000s, and has been recognized as one of the two most common mechanisms for measuring the risk of a security threat, alongside CVSS.
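As a concrete illustration, the DREAD model rates a threat on five factors and combines them into a single score. The sketch below is a minimal example assuming a 1–10 scale per factor and a simple mean; the exact scale and weighting vary between teams.

```python
# Minimal sketch of DREAD scoring: each factor is rated (here on a
# 1-10 scale, a common convention) and the mean gives the risk score.
from dataclasses import dataclass

@dataclass
class DreadRating:
    damage: int           # how bad is a successful attack?
    reproducibility: int  # how reliably can it be reproduced?
    exploitability: int   # how much effort/skill does exploitation take?
    affected_users: int   # how many users are impacted?
    discoverability: int  # how easy is the flaw to find?

    def score(self) -> float:
        factors = (self.damage, self.reproducibility, self.exploitability,
                   self.affected_users, self.discoverability)
        return sum(factors) / len(factors)

# Example: an easily found, trivially exploited flaw with wide impact
sqli = DreadRating(damage=9, reproducibility=10, exploitability=9,
                   affected_users=9, discoverability=10)
print(sqli.score())  # 9.4
```

The value of a structured model like this is less the arithmetic than the calibration: two analysts rating the same threat with the same rubric should land close together, which is exactly what ad-hoc severity labels lack.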

Knowing what the industry standards are for correctly assigned severity levels helps minimize your chances of being a victim of security theater. The breakdown below will help you to better identify if your pentest report is of a high quality, and if the results are accurately assigned.

Most firms break vulnerabilities down into 5 categories. These categories reflect unique combinations of risk and impact, and draw on the two models above as well as OWASP’s Risk Rating Methodology. Vulnerabilities can be categorized on a multitude of factors, such as impact, report confidence, likelihood of occurring, and complexity of remediation, among others.
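The five-category breakdown can be sketched as a mapping from a numeric risk score to a severity band. The thresholds below are illustrative assumptions for this sketch, not an industry standard; each firm calibrates its own.

```python
# Illustrative mapping from a 0-10 risk score to five severity bands.
# The cutoffs are assumptions for this sketch, not a published standard.
def severity(score: float) -> str:
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score >= 1.0:
        return "Low"
    return "Informational"

print(severity(9.4))  # Critical
print(severity(5.2))  # Medium
```

Whatever the exact cutoffs, the point is that they are written down in advance, so a “critical” label reflects the scoring model rather than the reporter’s mood.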

Critical Issues

These vulnerabilities have high impact, high risk, and are easily exploitable. Stop shipping and address these issues immediately. At minimum, a temporary “band-aid” solution should be implemented to prevent exploitation. Typically, the exposure is easily identified and successful exploitation is trivial. Fix before release or within 2 weeks.

Examples include:

  • SQL Injection
  • Remote Code Execution
  • Command Injections
  • Impersonation
  • Unauthorized Administrative Host/Application Access
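To ground the first item on the list, here is a minimal sketch of why SQL Injection is rated critical and how a parameterized query closes it, using Python’s built-in sqlite3 module and a toy in-memory table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# so the OR clause matches every row instead of none.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print(len(rows))  # 1 -- the injected clause matched the whole table

# FIXED: a parameterized query treats the payload as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0 -- no user is literally named "' OR '1'='1"
```

This is also why the severity is easy to defend with evidence: the exploit is a one-line payload, and the report can show it returning data the attacker should never see.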

High Issues

These vulnerabilities have high impact and are relatively easy to exploit. Like critical vulnerabilities, a temporary “band-aid” solution should be implemented right away to prevent exploitation. Successful exploitation may require either user-level access to the system or application, or exploits may not be publicly available. Fix within 14-30 days.

Medium Issues

These vulnerabilities have moderate impact and are relatively easy to exploit. Issues in this level are typically leveraged in conjunction with one or more security issues to compromise the system or application. Fix within 90-180 days. 

Examples include:

  • Failures in Error Handling
  • Failures in Auditing or Logging

Low Issues

These vulnerabilities have low impact and are more difficult to exploit. Items can be addressed at the discretion of system owners, depending on company or client risk-acceptance policies. Fix within 180-270 days.

Examples include:

  • General Information Leakage

Informational Issues

These vulnerabilities have very little impact and are difficult to exploit successfully. Fix at developer’s discretion or by customer requirement only.

Examples include:

  • Vulnerabilities in Database Libraries
  • Lack of Content Security Policy
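On the last item: adding a Content-Security-Policy header is usually a small change. Below is a minimal sketch using a hypothetical `add_security_headers` helper; the policy string is a commonly used restrictive baseline, not a one-size-fits-all recommendation, and should be tuned to your application.

```python
# Sketch of adding a Content-Security-Policy header to an HTTP
# response's header mapping. The helper and policy are illustrative.
CSP = "default-src 'self'; script-src 'self'; object-src 'none'"

def add_security_headers(headers):
    # Copy so callers keep their original mapping unchanged,
    # and don't overwrite a policy that is already set.
    out = dict(headers)
    out.setdefault("Content-Security-Policy", CSP)
    return out

resp = add_security_headers({"Content-Type": "text/html"})
print(resp["Content-Security-Policy"])
```

A missing CSP is informational precisely because it isn’t directly exploitable; it is defense-in-depth that limits the blast radius of a separate injection flaw.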

Choosing a Reputable Firm

A reputable firm will always provide unbiased risk ratings. This means that sometimes you may receive a report that contains only a series of informational findings, for example. This doesn’t necessarily mean the firm did a poor job of finding critical vulnerabilities. It may simply mean that your development team already did a good job of securing your code, so there are fewer vulnerabilities to find. The reputable firm provides you with confidence that your code is secure.

A trustworthy firm will also provide a detailed report that explains how they found each issue, with substantial evidence to back up their findings. You should be able to question any of their findings, and they should have a confident response as to why they chose that severity level. At Software Secured, we also include screenshots and a step-by-step guide to help you replicate the issues yourself, so you can fully understand what we’ve found.

Having the knowledge of what security theater looks like and the confidence to critically examine your pentest results helps you earn the best results possible.


