Static Application Security Testing (SAST): The Good, the Bad, and the Ugly

Static Application Security Testing (SAST) promises early vulnerability detection directly from source code. But how effective is it in practice? This guide explores where SAST tools excel, where modeling and rule-based detection break down, and what security and engineering teams should realistically expect from static analysis.

By Sherif Koussa ・ 5 min read

Static Application Security Testing (SAST) refers to tools that analyze source code to identify potential security vulnerabilities without executing the application. In simple terms, one application analyzes another and flags suspicious patterns for review. SAST promises early detection, automation, and scalability. In practice, it delivers real value but also real frustration.

Let’s take a sober look at what SAST does well, where it struggles, and what engineering teams should realistically expect from it inside a modern SDLC.

What SAST Is (And How It Actually Works)

At its core, a SAST tool performs two major functions:

  1. Modeling the software
  2. Applying rules to detect vulnerability patterns

Modern SAST engines invest heavily in modeling code. That modeling may include:

  • Lexical analysis
  • Abstract Syntax Trees (AST)
  • Data flow graphs
  • Interprocedural call graphs
  • Taint tracking across execution paths

On top of that model, vendors create rule sets designed to detect common vulnerability classes, typically aligned to benchmarks such as:

  • OWASP Top 10
  • SANS Top 25 Software Errors

In theory, this gives us structured, repeatable vulnerability detection directly from source code before deployment.
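To make the AST-based half of this pipeline concrete, here is a minimal sketch using Python's standard-library `ast` module. The "rule set" is a toy example for illustration, not any vendor's implementation:

```python
import ast

# A toy "rule set": function calls considered dangerous when matched.
DANGEROUS_CALLS = {"eval", "exec"}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Parse source into an AST and flag calls matching the rule set."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Match direct calls like eval(...) by inspecting the call's name node.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = """
user_input = input()
result = eval(user_input)  # flagged: pattern match on the AST
print(len(user_input))     # not flagged
"""
print(scan_source(sample))  # [(3, 'eval')]
```

Real engines layer data flow graphs and taint tracking on top of this, but the shape is the same: build a model of the program, then match rules against the model.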

The Good

There’s a reason SAST remains a staple in secure SDLC pipelines.

Good SAST tools are designed to run statically against source code and identify vulnerability patterns without ever executing the application. This makes the process extremely efficient: analysis can be done offline, early in development, and on every commit, using nothing but the source code.

The industry has also invested significant engineering effort in both halves of the pipeline: modeling software (lexical analysis, Abstract Syntax Trees, data flow graphs, full program call graphs) and generating rules that pattern-match those models against common vulnerability types (e.g., OWASP Top 10, SANS Top 25). For well-understood, pattern-based vulnerability classes, this investment pays off with repeatable detection at scale.

The Bad

Anyone who has used a modern static code analysis tool will tell you that these rules are not perfect. The SAST industry has essentially two measures of a tool's effectiveness: its precision (notice how it doesn't say accuracy) and its recall, evaluated against known benchmarks created by the industry and against your own code base. The informal definitions state:


In pattern recognition, information retrieval, and binary classification, precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. At the same time, recall (also known as sensitivity) is the fraction of relevant instances retrieved over the total number of relevant instances. Both precision and recall are therefore based on an understanding and measure of relevance.

In practical terms, if an application has 100 actual bugs, recall is the fraction of those 100 bugs the tool reported. Precision is the number of true positives divided by the total number of findings (true positives plus false positives). The problem with SAST tools today is that an initial scan of a previously unscanned, medium-sized software project can return thousands of reported issues. This brings us to the first natural set of questions:

  1. Are there really that many bugs in my program?
  2. What is the precision of the findings reported?
  3. What assurance is there that recall (the tool's coverage of the real bugs in my application) is high?
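The definitions above can be made concrete with a small worked example. The numbers here are hypothetical, chosen only to illustrate how noisy an initial scan can be:

```python
def precision_recall(true_positives: int, false_positives: int,
                     actual_bugs: int) -> tuple[float, float]:
    """Precision: fraction of reported findings that are real.
    Recall: fraction of the actual bugs that were reported."""
    reported = true_positives + false_positives
    precision = true_positives / reported
    recall = true_positives / actual_bugs
    return precision, recall

# Hypothetical scan: the application has 100 actual bugs; the tool
# reports 2,000 findings, of which 80 are true positives.
p, r = precision_recall(true_positives=80, false_positives=1920, actual_bugs=100)
print(f"precision={p:.2%} recall={r:.2%}")  # precision=4.00% recall=80.00%
```

Note how a tool can have strong recall (it found most of the real bugs) while still burying them under so many false positives that precision is in the single digits.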

The Ugly

The ugly answer to the above questions is that there is no assurance. But why? The answer boils down to the two dimensions of how a SAST tool works: modeling and pattern-recognition rules. So let's dig a little deeper into why modeling can pose a problem. SAST modeling is limited in many respects because it accounts only for the application's source code. Any security professional will tell you that vulnerabilities can be exposed not only by the application but also by the system, the application's configuration, and how that system is deployed in a production environment.

Technologies that break up data flows make it difficult to model proper data flow paths. Take, for example, a user posting on Twitter. The user enters text into a form and sends it via an HTTP POST request to the Twitter server. That tweet gets processed and inserted into a database of user/tweet records. In another portion of the code, those tweets are retrieved from the database, processed, and pushed to another client that requests a specific user's tweets. These two call flows are disconnected by the database boundary, and that raises a chain of questions: Was the tweet sanitized on insertion into the database? Was it sanitized for the technology of the eventual consumer of that data: HTML, JSON, or XML? After retrieval, was it sanitized again before being sent to the remote client, in the context of the display technology? Even so, there are only a limited number of ways to insert data into a database and to fetch and sanitize that data for consumption by a client, so this case remains tractable.

A more difficult task to model is server setup and configuration. Source code offers no clear visibility into how a server is hardened or what policies it applies in different use cases.

Harder still is access control. The way a system authenticates and authorizes a user can be implemented in an effectively infinite number of ways, depending on the product's business requirements. From a vulnerability-rules standpoint, detecting business-rule violations is at best a best-effort approach.
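The database disconnect described above can be sketched in a few lines. The two functions below might live in entirely different modules; a static analyzer that cannot model the database has no taint path linking them. All names here are illustrative:

```python
import html
import sqlite3

def store_tweet(db: sqlite3.Connection, user: str, text: str) -> None:
    # Flow 1: parameterized insert, safe against SQL injection, but the
    # text is stored raw, with no output encoding applied at this point.
    db.execute("INSERT INTO tweets (user, body) VALUES (?, ?)", (user, text))

def render_tweets(db: sqlite3.Connection, user: str) -> str:
    # Flow 2: a separate call path reads the stored text back. Whether it
    # must be HTML-escaped depends on the consuming client; the taint path
    # from store_tweet is broken at the database boundary, so a SAST tool
    # cannot easily tell whether this escaping is present, absent, or wrong.
    rows = db.execute("SELECT body FROM tweets WHERE user = ?", (user,))
    return "".join(f"<p>{html.escape(body)}</p>" for (body,) in rows)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tweets (user TEXT, body TEXT)")
store_tweet(db, "alice", "<script>alert(1)</script>")
print(render_tweets(db, "alice"))  # escaping happens only at render time
```

Here the code happens to be safe, because escaping is done at render time for an HTML consumer. Remove the `html.escape` call, or add a JSON consumer, and the stored payload becomes a stored XSS that no single-flow analysis would connect.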

Beyond modeling, several other factors shape the results you will actually see:

  1. Pattern-recognition accuracy on the SAST vendor's benchmarks versus on your specific software project.
  2. The nature of your software projects, how often that software touches third-party tools, and how that affects your SAST results.
  3. Vulnerabilities that third-party tools and dependencies add to your code base.
  4. The level of security education across your developer teams.

Listed below are vulnerability categories where SAST tools perform well, followed by categories where they struggle.

Good at:

  • Cross-site scripting
  • SQL Injection
  • Cross-Site Request Forgery
  • Insecure logging
  • 3rd Party Dependencies
  • Information Leakage
  • XML External Entity Injection
  • Insecure deserialization

Not so good at:

  • Insecure Access Control
  • Cross-Origin Resource Sharing (CORS) misconfigurations
  • Insecure direct object reference
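The contrast between the two lists shows up clearly in code. In the sketch below (names illustrative), the first function contains a string-concatenation SQL injection, exactly the kind of source-to-sink pattern SAST rules target. The second is an insecure direct object reference: the query is syntactically clean, so nothing short of understanding the application's ownership rules would flag it:

```python
import sqlite3

def find_user_bad(db: sqlite3.Connection, name: str):
    # SQL injection: user input concatenated into the query string.
    # A taint-tracking rule (user input -> execute sink) flags this reliably.
    return db.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def get_invoice(db: sqlite3.Connection, invoice_id: int):
    # Insecure direct object reference: the query is parameterized and looks
    # perfectly ordinary, but nothing checks that the caller actually owns
    # this invoice. No generic pattern distinguishes this from correct code.
    return db.execute("SELECT * FROM invoices WHERE id = ?", (invoice_id,)).fetchone()
```

A payload such as `' OR '1'='1` passed to the first function returns every row, which is precisely the behavior SAST rules are built to predict from the code alone. The second function's flaw only exists relative to a business rule the tool never sees.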

How Teams Can Make SAST More Effective

Instead of asking “Is SAST good or bad?”, the better question is:

How do we reduce friction and increase signal?

1. Calibrate Rulesets

Disable noisy categories.
Tune thresholds.
Focus on high-confidence findings first.
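In practice, calibration often means post-processing the raw report before it reaches developers. A minimal sketch follows; the findings structure here is purely hypothetical (real tools emit formats such as SARIF), and the disabled categories are one team's judgment call, not a recommendation:

```python
# Hypothetical findings as a tool might export them; the field names and
# rule IDs are illustrative, not any specific vendor's output format.
findings = [
    {"rule": "sql-injection", "severity": "high", "confidence": "high"},
    {"rule": "hardcoded-secret", "severity": "high", "confidence": "medium"},
    {"rule": "insecure-random", "severity": "low", "confidence": "low"},
    {"rule": "debug-enabled", "severity": "medium", "confidence": "low"},
]

# Categories this (hypothetical) team has decided are noise for them.
DISABLED_RULES = {"insecure-random"}

def triage(findings: list[dict]) -> list[dict]:
    """Drop disabled categories and low-confidence results, then surface
    the highest-severity findings at the top of the review queue."""
    kept = [f for f in findings
            if f["rule"] not in DISABLED_RULES and f["confidence"] != "low"]
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(kept, key=lambda f: order[f["severity"]])

for f in triage(findings):
    print(f["severity"], f["rule"])
```

The point is not the filtering logic itself but the discipline: decide explicitly which categories you act on, so the queue developers see stays short and credible.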

2. Treat It as Hygiene, Not Assurance

SAST improves baseline hygiene.
It does not prove your system is secure.

3. Invest in Developer Education

Many repeated findings stem from:

  • Misunderstood frameworks
  • Improper input handling
  • Unsafe dependency use

Better education reduces findings upstream.

4. Combine With Other Controls

SAST works best as part of a layered approach:

  • SAST
  • Dependency scanning
  • DAST
  • IaC scanning
  • Code review
  • Runtime monitoring

Security is multi-dimensional. SAST covers one dimension.

Final Thoughts

SAST excels at detecting repeatable, pattern-based vulnerabilities in source code. It struggles with context-heavy, configuration-driven, and business-logic flaws. Its effectiveness depends heavily on precision, recall, and how closely your code aligns with vendor benchmarks.

Used thoughtfully, SAST reduces risk and improves hygiene.
Used blindly, it creates noise and false confidence.

The real value of SAST isn’t that it makes your application secure. It’s that it makes insecure patterns harder to ignore early on.

And in modern software development, early matters.


About the author

Sherif Koussa

Sherif Koussa is a cybersecurity expert and entrepreneur with a rich software building and breaking background. In 2006, he founded the OWASP Ottawa Chapter, contributed to WebGoat and OWASP Cheat Sheets, and helped launch SANS/GIAC exams. Today, as CEO of Software Secured, he helps hundreds of SaaS companies continuously ship secure code.

