7 Most Popular Vulnerability Scanner Tools 2019 [Free & Paid]


Vulnerability scanning aims to reveal security weaknesses in an application by using automated tools to assess its code, design, and functionality. These tools detect design flaws that lead to vulnerabilities like Cross-Site Scripting (XSS), SQL Injection, path disclosure, and other vulnerabilities found in the OWASP Top 10.

The Vulnerability Scanner Landscape

Understanding what vulnerabilities exist and identifying those relevant to your application will be the first step in implementing vulnerability scanning practices. The OWASP Top 10 is an excellent resource that will call your attention to a short list of established threats. Integrating additional lists like the CWE/SANS Top 25 will help fill gaps and provide a complete vulnerability mitigation strategy.

OWASP Top 10 2017

  • A1 Injection
  • A2 Broken Authentication
  • A3 Sensitive Data Exposure
  • A4 XML External Entities (XXE) [NEW]
  • A5 Broken Access Control
  • A6 Security Misconfiguration
  • A7 Cross-Site Scripting (XSS)
  • A8 Insecure Deserialization [NEW]
  • A9 Using Components with Known Vulnerabilities
  • A10 Insufficient Logging & Monitoring [NEW]

Source: OWASP Top 10 2017

Organizational Considerations

The vulnerability scanner selection process begins by identifying organizational requirements which can be divided into four broad categories: cost, usability, update frequency, and support.

  1. Cost: A vulnerability scanner’s cost can be subdivided into initial and operational costs. Initial costs include the cost of the software and any additional hardware, training, personnel, or resources its implementation might entail. Operational costs include ongoing expenses such as licensing fees and continued training.
  2. Usability: The amount of effort, training, and skill a tool requires to be used effectively will describe its usability. For example, a small agile project might not have the skilled personnel available to properly employ a complex vulnerability scanner and would be better served by a more accessible option.
  3. Update Frequency: The quantity and quality of updates to the application and to the rulesets it uses to identify the most recent vulnerabilities are an important consideration when choosing a scanner. For example, if a vulnerability scanner is no longer receiving timely updates, it would be advantageous to know this before investing heavily in its implementation.
  4. Support: The state of a scanner’s documentation and the channels through which support is available should also be considered. Discussion forums, email, and phone support become especially important as the complexity of a scanner increases.

Vulnerability Scanner Reviews

Qualys | On-premise 

Strengths

  • Protects the whole IT system
  • Makes vulnerabilities easy to understand
  • Nice reports
  • User friendly
  • Training goes a long way
  • Powerful tool for keeping track of all types of web systems

Dislikes

  • Doesn’t read inside Docker containers
  • Web-App Vulnerability Scanner had problems logging in with one-page JS applications
  • Not well suited for modern technology
  • Quality of security posture
  • False positives
  • Could be automated
  • Can take a long time to complete a scan
  • Cloud Agents work only sometimes, with multiple issues of data not processing or getting stuck in the queue

Rapid7 Nexpose | On-premise

Strengths

  • Easy to install
  • Intuitive UI
  • Full range of basic penetration testing phases
  • Reports are well presented with relevant information
  • If a vulnerability is announced, Tenable releases a plugin for it within hours
  • Great customer support
  • Multiplatform scan

Dislikes

  • Need to be rather technical to get up and running
  • Price is rather high
  • Customer support is a bit slow

Nessus | On-premise 

Strengths

  • Good for compliance checks
  • Good plugin-based checks
  • Support for scanning several device types (routers, switches, firewalls, endpoints, etc.)
  • Great UI
  • 24/7 support

Dislikes

  • Reports could be better
  • Scanning through a firewall creates a few false positives
  • Price is high
  • Not a good option as a web application scanner
  • Reports lack data such as how to detect and prevent some incidents

Acunetix WVS | On-premise 

Strengths

  • User interface is easy to use
  • Reports are informative
  • Scanning process is fairly easy to set up
  • Coverage of vulnerabilities is great

Dislikes

  • Technical support team was not helpful
  • Scans take a while since it explores all parts of the application
  • Tends to consume a lot of hardware resources

Burp Suite | On-premise & SaaS

Strengths

  • Fast and easy to set up
  • Enough to test almost all security-related vulnerabilities
  • Great tool for pentesting work, with customization
  • Allows spidering the website, both manually and automatically
  • Can act as a Man in the Middle (MITM) and help you change GET and POST requests

Dislikes

  • Often gives false positives
  • User interface is not great
  • Hard to link the app with system web browsers

AppScan | On-premise

Strengths

  • Real-time agent status monitoring
  • Cost effective for its performance and features
  • Generates accurate results based on inputs
  • Has advanced configuration options for testing a broad range of cases
  • It is easy to configure
  • Broad range of testing
  • Alerts of possible threats are good

Dislikes

  • IBM AppScan Standard doesn’t offer SCA; it is limited to the Enterprise edition only
  • No support for Oracle Fusion Middleware stack scanning
  • It is not tailored to different frameworks
  • Enterprise management requires the purchase of additional AppScan products
  • Sometimes gives fewer results as the number of tests performed increases

Contrast | On-premise  & SaaS 

Strengths

  • Delivers fast results
  • Easy to automate and integrate security testing into CI/CD
  • Also works with open source and third-party frameworks
  • Easy for developers to run scans
  • Security dashboard with real-time metrics
  • Low false positive ratio

Dislikes

  • Dependent on tech stack (Java, Node.js, Python & .NET)
  • Price is high for enterprises
  • Missing web-layer vulnerability detection

Technical Considerations

Technical requirements will eventually come to influence the selection process. The objective of the Web Application Security Scanner Evaluation Criteria (WASSEC) is to provide vendor-neutral guidelines, focused on technical considerations, that help security professionals choose the best scanner while also satisfying compliance requirements like those of the Payment Card Industry Data Security Standard (PCI-DSS). To this end, WASSEC has published a detailed document describing an ideal scanner evaluation, organized into the eight categories listed below.

WASSEC Scanner Evaluation Categories

  1. Protocol Support
  2. Authentication
  3. Session Management
  4. Crawling
  5. Parsing
  6. Testing
  7. Command and Control
  8. Reporting

Black Box Testing

Black box scanners evaluate an application’s security through automated, language-agnostic functionality assessments. The following list outlines test design techniques used in black box testing.

Black Box Testing Techniques

  1. Decision Table Testing: “Decision … tables associate conditions with actions to perform”
  2. All-pairs testing: “a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters.”
  3. Equivalence Partitioning: “a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived.”
  4. Boundary value analysis: “a software testing technique in which tests are designed to include representatives of boundary values in a range.”
  5. Cause-effect graph: “a directed graph that maps a set of causes to a set of effects.”
  6. Error Guessing: “a test method in which test cases used to find bugs in programs are established based on experience in prior testing.”
  7. State Transition Table: “a table showing what state (or states in the case of a nondeterministic finite automaton) a finite semiautomaton or finite state machine will move to, based on the current state and other inputs.”
  8. Use Case Testing: “a list of actions or event steps, typically defining the interactions between a role … and a system, to achieve a goal.”
  9. User Story Testing: “an informal, natural language description of one or more features of a software system.”
  10. Domain Analysis: “the process of analyzing related software systems in a domain to find their common and variable parts.”

Source: Black Box Testing (https://en.wikipedia.org/wiki/Black-box_testing)
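Boundary value analysis and equivalence partitioning are easy to sketch in code. The validator below is hypothetical (an inclusive 18–65 age range is assumed purely for illustration); the test inputs follow the boundary-value pattern of probing just below, at, and just above each limit.

```java
// Hypothetical validator under test: accepts ages in the inclusive range 18..65.
public class BoundaryValueExample {
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // Boundary value analysis: probe just below, at, and just above each boundary.
        int[] inputs = {17, 18, 19, 64, 65, 66};
        for (int age : inputs) {
            System.out.println(age + " -> " + isValidAge(age));
        }
    }
}
```

The same six values also cover the two equivalence partitions (valid and invalid ages), so a small, carefully chosen input set exercises both techniques at once.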

Considerations

The implementation of vulnerability scanning processes will require a strategic approach. To start, scanning tools should be calibrated once a list of relevant vulnerabilities has been compiled. Calibration will ensure your scanning tool is capable of detecting the vulnerabilities being sought within your application and its environment.

Employing multiple vulnerability scanners and creating multiple scanning profiles for each will help minimize blind spots, establishing stronger security practices. Furthermore, documenting scanner attributes like software versions, configurations, and environments will increase the value of reports generated by scanning practices. Lastly, rating the performance of each scanner used by your organization will improve future scanning efforts, allowing you to quickly select and employ the best scanner for an application and its specific environment.

In all cases, it should be noted that attackers have access to the same scanners you use to analyze your applications, which gives you an opportunity to locate vulnerabilities before an attacker can exploit them. However, relying entirely on publicly available vulnerability lists, tools, and their default configurations will leave your application open to threats not popular enough to make lists like the OWASP Top 10, or not detected by default configurations, creating a blind spot in your security coverage. Furthermore, the perpetual existence of unknown vulnerabilities ensures that some blind spot will remain a constant fixture of your application security landscape. Consequently, employing a strategy which uses custom rules, integrates application-specific plugins, and remains vigilant for new threats will give you a strong chance of mitigating not only the threats you know exist but also those you aren’t yet aware of.

Final Thoughts

Choosing the right scanner will require identifying project objectives, scanner requirements, and organizational requirements such as price, usability, and support. Vulnerability scanning is an essential component of application security efforts, and its ability to analyze an application’s functionality, code, and structure with the help of both white and black box testing gives application security teams a unique perspective from which security can be improved.


Introduction to SQL Injection Mitigation

What is SQL Injection?

The popularity of Structured Query Language (SQL) injection attacks has grown significantly over the years, and employing relevant mitigation practices will help keep your application off a growing list of insecure applications implicated in significant data breaches. Despite SQL’s introduction nearly 30 years ago, SQL injection has been responsible for millions of lost records, with damages also in the millions, earning it the #1 spot in the 2017 OWASP Top 10. One such attack occurred in 2008, when Heartland Payment Systems (HPS) was compromised through an SQL injection vulnerability which exposed 138 million credit cards and caused $140 million in damages. Secure use of SQL could have prevented this.

SQL injection is an attack that occurs when specially constructed input provokes an application into misconstructing a database command, resulting in unforeseen consequences. Those consequences can include the circumvention of authentication and authorization mechanisms, allowing the attacker to add, modify, delete, and retrieve records, compromising the integrity of a database and the applications it provisions.

Cultivating an environment which enables secure coding practices that prevent SQL injection vulnerabilities from making their way into an application is possible. Although this article focuses on outlining the technical practices which can prevent injection attacks, developing an organization-wide security minded culture should also be an objective. Education, design, and code review are just a few of the components within the Software Development Life Cycle (SDLC) that will contribute to an application’s ability to successfully defend itself from SQL Injection attacks and its overall success.

SQL Injection Mitigation Strategies

Secure Coding & SDLC

Security-driven programming practices will always be the best defense against SQL injection attacks. Ensuring developers are aware of the risks, tools, and techniques which can mitigate SQL vulnerabilities is your first and best line of defense. However, cultivating the use of secure programming techniques will also require a commitment to their implementation throughout the SDLC. Developing security-minded education, planning, testing, and review practices are just a few components within an SDLC that will help prevent SQL injection vulnerabilities from making their way into your application.

Input Validation & Sanitation

Client side input sanitization and validation should only be considered a convenience for the end user, improving their user experience. For example, it could prove useful to provide feedback on a proposed username, indicating whether or not it will meet the application’s criteria. However, client side sanitization and validation can be bypassed, and as such, server side solutions should be employed.

Server side sanitization and input validation ensure that data supplied by the user does not contain characters, like single or double quotes, that could modify an SQL query and return data not intended in the application’s design. Specifically, validation makes sure that user-supplied data satisfies an application’s criteria, while sanitization refers to the process which modifies user input in order to satisfy the criteria established by validation. Combining both results in a scenario where single quotations contained within a user-submitted string are first modified or removed by sanitization, and the result is then validated to ensure no single quotations remain, satisfying the application’s requirements.
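As a minimal sketch of the distinction, server-side validation can be an allow-list check and sanitization a transformation that strips disallowed characters. The username policy below (letters, digits, underscore, 3–20 characters) is an illustrative assumption, not a standard:

```java
import java.util.regex.Pattern;

// Minimal server-side validation/sanitization sketch.
// The allow-list policy is illustrative, not a recommendation for every field.
public class InputPolicy {
    private static final Pattern VALID_USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    // Validation: does the input satisfy the policy as-is?
    static boolean isValid(String input) {
        return input != null && VALID_USERNAME.matcher(input).matches();
    }

    // Sanitization: remove everything outside the allow-list.
    static String sanitize(String input) {
        return input == null ? "" : input.replaceAll("[^A-Za-z0-9_]", "");
    }

    public static void main(String[] args) {
        String raw = "bob' OR '1'='1";
        System.out.println(isValid(raw));           // false: quotes and spaces fail validation
        System.out.println(isValid(sanitize(raw))); // true: sanitized value passes validation
    }
}
```

Note that even validated input should still be passed to the database through parameterized queries, as described in the next section; validation narrows the attack surface but is not a substitute for parameterization.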

Stored Procedures & Parameterization

Query parameterization occurs when stored procedures, defined as sub-routines an application uses to interact with a relational database management system (RDBMS), employ variable binding. Variable binding is a process which requires the definition of an SQL statement prior to the insertion of variables, allowing a clear delineation between code and user input. Essentially, prepared statements which parameterize queries protect the intent of an SQL query. For example, if an attacker were to submit the string ‘ OR ‘1’=’1, a prepared statement would literally attempt to match the string ‘ OR ‘1’=’1 against a field in the database, rather than evaluating the boolean expression.

// Stored Procedure Example (Java)
// Source: https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet

String custname = request.getParameter("customerName"); // This should REALLY be validated
try {
    CallableStatement cs = connection.prepareCall("{call sp_getAccountBalance(?)}");
    cs.setString(1, custname);
    ResultSet results = cs.executeQuery();
    // … result set handling
} catch (SQLException se) {
    // … logging and error handling
}

Prepared Statements

Prepared statements are essentially server-side parameterized queries and, when used with secure coding techniques, can produce equally secure code. Simply put, the construction of a secure prepared statement results in the automatic parameterization of user input. However, the use of dynamically constructed queries should be avoided unless special libraries and techniques are used to protect against gaps in security coverage that might emerge. Libraries like Opaleye for Haskell and SQL Builder for Python can be used to this effect. However, if dynamic SQL must be used, proper sanitization and validation of user input will be necessary to safeguard the application.

The following code example uses a PreparedStatement, Java’s implementation of a parameterized query, to execute the same database query.

// Prepared Statement Example (Java)
// Source: https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet

String custname = request.getParameter("customerName"); // This should REALLY be validated too
// perform input validation to detect attacks
String query = "SELECT account_balance FROM user_data WHERE user_name = ? ";
PreparedStatement pstmt = connection.prepareStatement( query );
pstmt.setString( 1, custname);
ResultSet results = pstmt.executeQuery( );

Program Analysis Techniques & Proxies

Although the creation of secure SQL code should be your first priority, there exist tools that can facilitate the process. SQL specific static analysis tools like SQL Code Guard (https://www.red-gate.com/products/sql-development/sql-code-guard/) can analyze a database and its queries against a set of rules to reveal vulnerabilities. Other tools like SQLProb operate between an application and its database acting as a proxy which intercepts, analyzes, and discards potentially malicious queries before they reach the database. It should be noted that tools which rely on predefined patterns to identify malicious queries become less effective as their rulesets age and new attacks are discovered.

Final Thoughts

Given the relative ease of executing SQL injection attacks and their serious consequences, their mitigation should be a priority for all application developers. With numerous defensive techniques and tools available in the form of stored procedures, parameterization, program analysis techniques, and even black box tools like OWASP’s Zed Attack Proxy, developers should have no trouble improving the security of their application and its users against SQL injection attacks.

Recommended Reading

Secure Scrum – Integrating Security with Agile

Successfully implementing strong application security is one of the most challenging non-functional tasks Scrum teams face. Traditional application security practices, which carefully integrate security throughout the Software Development Lifecycle (SDLC), are often at odds with Scrum methodology, which favors responsive development cycles that quickly produce working code. To unite the strengths offered by Scrum with the necessity of security, professors from the Munich IT Security Research Group modified Scrum, allowing security to be integrated within the Agile framework while minimizing changes to the original Scrum methodology.

The Secure Scrum Methodology

Figure 1 – Integration of Secure Scrum components into standard Scrum. Christoph Pohl and Hans-Joachim Hof, 2015.
  1. Identification is the process which diagnoses potential security concerns throughout the application development process. Practically, this is accomplished by marking items in the Product Backlog when security concerns are discovered. The identification process occurs during Product Backlog creation, Product Backlog Refinement, Sprint Planning, and Sprint Review.
  2. Implementation is the process which ensures security concerns are properly understood by the development team and is carried out during Sprint Planning and Daily Scrum meetings.
  3. Verification evaluates an application’s security and is carried out during Daily Scrums.
  4. Definition of Done establishes the criteria for considering an application security concern resolved. Source: (Christoph Pohl and Hans-Joachim Hof, 2015)

Defining and linking security risks to user stories in the Product Backlog, by marking stories which contain security risks with an “S-Mark” and then describing each concern with “S-Tags”, represents the core of the Secure Scrum methodology. Adding an S-Mark and related S-Tags to a user story indicates that a security concern has been identified, described, and formally recognized by the development team. S-Tags provide standardized definitions of the specific security concerns flagged by an S-Mark. For example, attaching an S-Tag labelled “XSS”, representing a potential Cross-Site Scripting attack, to an S-Marked user story allows developers, including those with no security training, to understand what security risks are present and the path to their mitigation. The success of this methodology was initially confirmed in field tests where university students employing Secure Scrum produced more secure code than control groups who did not. By guiding developers through identification, implementation, verification, and the process which defines the conditions of completion, Secure Scrum succeeds in contributing to the development of secure applications.
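The S-Mark/S-Tag bookkeeping described above can be modeled very simply. The class below is an illustrative sketch only; its names and tags are assumptions for demonstration, not part of the published Secure Scrum methodology:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative model of a Product Backlog item carrying Secure Scrum markings.
public class UserStory {
    final String title;
    final Set<String> sTags = new LinkedHashSet<>(); // e.g. "XSS", "SQLI"

    UserStory(String title) { this.title = title; }

    void addSTag(String tag) { sTags.add(tag); }

    // A story is considered S-Marked once at least one security concern is tagged.
    boolean isSMarked() { return !sTags.isEmpty(); }

    public static void main(String[] args) {
        UserStory story = new UserStory("As a user, I can post comments");
        story.addSTag("XSS"); // rendering user comments could allow Cross-Site Scripting
        System.out.println(story.isSMarked() + " " + story.sTags);
    }
}
```

The point of the structure is visibility: any developer scanning the backlog can see which stories carry security concerns and which standardized tags describe them.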

Figure 2 – Usage of S-Tags to mark user stories in the Product Backlog and to connect user stories to descriptions of security related issues. Christoph Pohl and Hans-Joachim Hof, 2015.

A second defining feature of Secure Scrum is the linking of a descriptive security layer to each user story describing the security concern. Secure Scrum defines a security-relevant user story in terms of the loss “that may occur whenever the functionality that implements the user story gets attacked or data processed by this functionality gets stolen or manipulated” (Pohl and Hof, 2015). In more concrete terms, this can be represented as “If an attacker gains access to this information, we will suffer $X in damages”. Although it may not always be possible to attach a loss value, doing so allows decision makers to understand the significance of a potential attack by employing a consistent and systematic methodology to quantify and evaluate threat mitigation efforts. If a security team can raise the cost of exploiting a vulnerability to the point where it equals or surpasses the potential value of exploiting it, that vulnerability is considered mitigated.

Final Thoughts

The Secure Scrum methodology offers a clear, systematic, and effective means of integrating security; however, it also inherits a number of Scrum’s weaknesses. In particular, Secure Scrum’s ability to establish and schedule longer-term goals remains problematic, a problem it inherits from Scrum, which overlooks documentation processes critical to many security practices. Another challenge posed by Scrum is the expectation that team members have interchangeable skillsets, which would mean every developer needs to know security controls and secure programming techniques. Within the field of application security, this represents an ambitious undertaking, as the skillsets of application security professionals are often difficult to duplicate and are in high demand.

With minimal modifications to Scrum’s agile methodology, Secure Scrum allows the integration of security practices that contribute to secure application development while also conserving Scrum’s quick, responsive approach. Although long term planning and the creation of documentation remain challenging activities, as is generally the case with agile methodologies, its success at integrating security within the software development life cycle makes Secure Scrum a clear upgrade over Scrum for identifying and mitigating application security concerns.

Related Reading

Secure Application Configuration Basics

In June of 2016 it was revealed that a database maintained by a large data brokerage company had been hacked, exposing 154 million US voter records with personal details like gun ownership, positions on gay marriage, and email addresses. Database misconfiguration was the cause: the CouchDB database which stored the information was not configured to require authentication to access the voter records it held. Secure configuration practices could have ensured the database was accessible only to authenticated users, preventing the breach.

Secure configuration is an application and environment hardening process whose objective is to minimize an application’s attack surface. Numerous paths can be taken to reach this end, including removing or disabling unnecessary application functions, modifying configuration defaults, customizing error messages, and ensuring deployed builds have deployment files and credentials removed. Although these secure configuration practices represent only a few of those available, they share a basic motivation: to simplify and minimize an application’s operational footprint while taking into consideration how the application interfaces with its environment.

Before You Start

Before developing secure configuration practices, an operational baseline should be established for the applications, plugins, scripts, and other software components your organization employs. Practically, this means taking an inventory of applications and software components that coexist with your own and tracking information like version numbers and upgrade paths. The more you know about your application and its environment, the better positioned you are to ensure that the configurations being used are, and continue to be, secure. Established best practices like those published by OWASP should be used to evaluate your baseline and measure your progress, ensuring your secure configuration practices continue to improve.

Secure Configuration Strategies

There exist broad secure configuration strategies that organizations can implement to improve their security posture.

Minimize Attack Surface

The process behind minimizing the attack surface available to an attacker can be summarized with the idea that “simpler is better”. In practice this means simplifying functionality and limiting user access to only what is absolutely necessary for the task at hand. More concretely, an application with a single purpose avoids the supplementary features, and the larger code base they imply, that increase the probability of exploitable coding errors. Promoting applications and functions that have a single purpose, when possible, will contribute to the development of more secure applications and environments.

Low Hanging Fruit

In many cases, practices that can enhance the security posture of an application are simple and inexpensive to implement. For example, forgetting to disable PHP’s “display_errors” in a build destined for production can reveal clues about how the application is structured, giving attackers additional information they could use to break into your application.
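For instance, a production php.ini would typically keep errors out of HTTP responses while still logging them for operators (the log path below is an illustrative assumption; these are standard PHP directives):

```ini
; Production php.ini fragment: never render errors to clients,
; but keep them in a server-side log for operators.
display_errors = Off
display_startup_errors = Off
log_errors = On
error_log = /var/log/php_errors.log
```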

Consistency

Ensuring consistency in the processes your organization uses to transition between development and production environments will minimize the changes that must be made when deploying a new build and reduce the possibility of misconfiguration. Although some elements like passwords will need to change, simplicity will promote security while also reducing deployment time.
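One common way to keep the deployment process identical across environments while still varying secrets is to read environment-specific values from the environment rather than from code. The variable name and fallback below are illustrative assumptions:

```java
// Sketch: identical code runs in dev and prod; only the environment differs.
// DB_PASSWORD is an illustrative variable name, not a standard.
public class DeployConfig {
    static String dbPassword() {
        String fromEnv = System.getenv("DB_PASSWORD");
        // Fall back to a harmless development-only default when unset.
        return (fromEnv != null && !fromEnv.isEmpty()) ? fromEnv : "dev-only-password";
    }

    public static void main(String[] args) {
        System.out.println("Password source: "
            + (System.getenv("DB_PASSWORD") != null ? "environment" : "development default"));
    }
}
```

Because the code path is the same everywhere, there is no “flip this setting for production” step to forget, which is exactly the class of misconfiguration this strategy is meant to eliminate.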

Deployment Orchestration

Deployment orchestration provides organizations with the opportunity to create and manage a set of secure configuration files for all applications and their environments in a central location. These tools facilitate quickly pushing approved updates to software, plugins, libraries, and their wider environments, on a timeline and through a process carefully controlled by administrators. Additionally, orchestration tools can, at an administrator-defined interval, ensure that an application, its environment, and any additional components remain configured as originally specified by proactively reverting changes that deviate from the defined baseline.

Final Thoughts

Reducing an application’s attack surface, taking advantage of low hanging fruit, and employing the automation afforded by orchestration are effective strategies which will reduce the possibility of human error contributing to a security bug. Ultimately, these secure configuration practices attempt to balance usability and security, and care must be taken to ensure that the personal information users trust organizations with is managed carefully, so that mistakes like forgetting to assign a username and password to a database holding the records of 154 million people don’t happen.

Recommended Reading

Application Security Code Review Introduction


Introduction to Security Code Review

The secure code review process systematically applies a collection of security audit methodologies capable of ensuring that both environments and coding practices contribute to the development of an application resilient to operational and environmental threats.

In practice, code reviews can take on numerous forms including lightweight code discussions or more involved processes such as pair programming, over the shoulder programming, and tool assisted practices. More advanced methodologies involve threat modeling, automated static code analysis, manual inspection, and formalized communication methodologies.

Both pair and over the shoulder programming involve two programmers reviewing the code as it is being produced while frequently switching roles. Static tool analysis focuses on “white box” testing where a security professional analyzes an application’s source code using automated tools such as static code analysis tools and scripts to locate issues. All code review practices aim to identify security flaws in code, ensure requirements are met, and also share knowledge among developers growing an organization’s capacity to respond to security challenges it faces.

Although some perceive the secure code review process as overly complex, trusting passive solutions like firewalls to secure applications will fail to keep pace with a rapidly evolving threat landscape. Today, secure application development necessitates an active, structured, and comprehensive security audit strategy capable of revealing security issues other methods overlook. To accomplish this, code review relies on curated lists of critical vulnerabilities, checklists, automated tools, threat modelling, and human intervention to provide contextual clarity to findings and consequently, produce a clearer understanding of the security challenges application developers will have to overcome.

Code Analysis Resources & Tools

At the heart of the code review process is the content that will fuel the process. For example, the Open Web Application Security Project’s (OWASP) Top 10 is a list of what OWASP considers to be the “10 most critical web application security risks” and provides the reader with a description of the vulnerability, examples of possible attacks, threat mitigation strategies, and additional relevant resources. OWASP cheat sheets and checklists are useful aides to the complex review process. Both the OWASP Top 10 and their checklists are freely available on their website and will help ensure critical vulnerabilities and review components are not overlooked during code review.

Threat Modelling

Threat modelling will provide your organization with a vantage point from which to better identify threats and formulate responses by providing context to security efforts. Using a structured threat modelling process to decipher the relationships between an application’s components, it becomes possible to identify design flaws, critical components, and other modules that need a closer look. It is unlikely that the application’s design, as well as its underlying environment, will remain constant throughout the project’s life. As such, the threat model should be treated as a living document, and the threat modelling process as a marathon, not a sprint.


Static Code Analysis

Automated static code analysis tools are another essential component of the review process, offering near 100% code coverage and the ability to expose vulnerabilities invisible to other methodologies. For example, the discovery of an XSS or SQL injection vulnerability with a static source analysis tool could lead to searching the codebase for similar vulnerable coding patterns, a time-intensive endeavor and potentially impossible if attempted by hand.
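As an illustration (not any particular tool), a pattern-based scan that flags string-concatenated SQL might look like the following sketch; the rules and the sample code it scans are invented for the example:

```javascript
// Minimal sketch of a pattern-based static scan: flag lines that build SQL
// queries through string concatenation, a common SQL injection indicator.
// The patterns and the scanned source are illustrative, not a real scanner.
const patterns = [
  { name: 'sql-concat', regex: /(SELECT|INSERT|UPDATE|DELETE)[^;]*['"]\s*\+/i },
  { name: 'dom-xss',    regex: /innerHTML\s*=\s*[^'"]/ }
];

function scanSource(source) {
  const findings = [];
  source.split('\n').forEach((line, i) => {
    for (const p of patterns) {
      if (p.regex.test(line)) findings.push({ line: i + 1, rule: p.name });
    }
  });
  return findings;
}

const sample = [
  'const q = "SELECT * FROM users WHERE id = " + userId;',
  'const safe = db.query("SELECT * FROM users WHERE id = ?", [userId]);'
].join('\n');

console.log(scanSource(sample)); // flags line 1 only; the parameterized query passes
```

Real analyzers work on the parsed syntax tree rather than raw lines, which is what lets them cover a large codebase far faster than a manual search.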

The Application Security Professional

Lastly, binding the secure code review process together is the security professional who provides context and clarity. While automated tools can easily outperform their human counterparts in tasks like searching and replacing vulnerable code patterns within an immense codebase, they fall short in a number of other areas. The mind of an experienced security analyst is indispensable to tasks such as the identification of application logic issues, as the ability to reflexively examine the code and its development process has yet to be duplicated by our automated counterparts. Ultimately, the code review process will be advanced by combining the strengths of automated tools with those of security professionals, allowing security teams to reveal a comprehensive array of risks and effectively convey their business impacts, an outcome neither could accomplish on its own. Want to learn more about our code review performed by application security professionals? Check it out here.



Download our secure code review checklist.

[ Github & Excel ]

Differentiating Federated Identities: OpenID Connect, SAML v2.0, and OAuth 2.0

security best practices

OpenID Connect vs. SAML 2.0, vs. OAuth 2.0

The gradual integration of applications and services external to an organization’s domain motivated both the creation and adoption of federated identity services whose evolution continues to this day. Single sign-on (SSO), a forerunner to identity federation, was an effective solution which could manage a single set of user credentials for resources which existed within a single domain. However, the growth of high quality third party applications pushed organizations to rely on tools outside the domains they controlled and beyond the scope of SSO. Federated identities offered organizations the opportunity to preserve the benefits of SSO while extending the reach of a user’s credentials to include external resources which reduces costs and when implemented correctly, can increase security.

Three protocols employed in the majority of federated identity deployments will be examined: OpenID Connect, SAML v2.0, and OAuth 2.0.

OpenID Connect

OpenID Connect, launched in February of 2014, is the current iteration of the open standard which allows users to employ a single set of credentials, managed by a preferred third-party OpenID Connect identity provider (IDP) such as Google, Microsoft, or PayPal, to authenticate with numerous online services. OpenID Connect is built on top of the OAuth 2.0 protocol and employs REST/JSON for messaging. It allows developers to authenticate users without creating and maintaining a local authentication system. Instead, developers can rely on the expertise of an organization committed to the secure implementation of an identity protocol, one capable of ensuring, to the best of its abilities, that the identities it manages are authentic. The protocol can be used across numerous platforms, including mobile, and with a varied array of clients such as JavaScript applications, to produce confirmable identity assertions. (Source: http://openid.net/connect/faq/)

The OpenID Connect specification defines three roles:

  • The end user or the entity that is looking to verify its identity
  • The relying party (RP), which is the entity looking to verify the identity of the end user
  • The OpenID Connect provider (OP), which is the entity that registers the OpenID URL and can verify the end user’s identity
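The relying party’s side of this exchange can be sketched as follows: a minimal, illustrative check of an ID token’s claims. The issuer and client identifiers are hypothetical, and signature verification is deliberately omitted; a real RP must verify the token’s signature against the OP’s published keys.

```javascript
// Sketch of a relying party checking an OpenID Connect ID token (a JWT).
// Signature verification is omitted here for brevity; a real RP must verify
// the signature against the OP's published JWKS before trusting any claim.
function decodeIdToken(idToken) {
  const payload = idToken.split('.')[1];
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

function checkClaims(claims, expectedIssuer, clientId, nowSeconds) {
  if (claims.iss !== expectedIssuer) return 'untrusted issuer';
  if (claims.aud !== clientId) return 'token not meant for this client';
  if (claims.exp <= nowSeconds) return 'token expired';
  return 'ok';
}

// Hypothetical token payload for demonstration only.
const claims = { iss: 'https://op.example.com', aud: 'my-client', sub: 'user-1', exp: 2000000000 };
const token = 'header.' + Buffer.from(JSON.stringify(claims)).toString('base64url') + '.sig';
console.log(checkClaims(decodeIdToken(token), 'https://op.example.com', 'my-client', 1700000000)); // 'ok'
```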

[Figure: OpenID Connect use case]

Security Considerations

1. Mix-Up Attacks, July 16, 2016

OpenID describes mix-up attacks as follows,

“Broadly, the attacks consist of using dynamic client registration, or the compromise of an OpenID Provider (OP), to trick the Relying Party (RP) into sending an authorization code to the attacker’s Token Endpoint. Once a code is stolen, an attack that involves cutting and pasting values and state in authorization requests and responses can be used to confuse the relying party into binding an authorization to the wrong user.

Many deployments of OpenID Connect (and OAuth) in which the configuration is static, and the OPs are trusted, are at greatly reduced risk of these attacks. Despite that, these suggestions are best current practices that we recommend to all deployments to improve security, with a particular emphasis on more dynamic environments.”

2. Covert Redirect, May 2014

An attacker can abuse the redirect URI parameter of an end-user’s authorization server, turning it into an open redirector and employing it in a phishing attack.
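A common mitigation is strict, exact-match validation of the redirect URI against pre-registered values; a minimal sketch with illustrative URIs:

```javascript
// Sketch of mitigating covert-redirect/open-redirector abuse: accept a
// redirect_uri only if it exactly matches a pre-registered value, rather
// than doing prefix or substring matching. The URIs here are illustrative.
const registeredRedirectUris = new Set([
  'https://app.example.com/oauth/callback'
]);

function isAllowedRedirect(uri) {
  return registeredRedirectUris.has(uri); // exact match only
}

console.log(isAllowedRedirect('https://app.example.com/oauth/callback'));           // true
console.log(isAllowedRedirect('https://app.example.com/oauth/callback?next=evil')); // false
console.log(isAllowedRedirect('https://evil.example.net/phish'));                   // false
```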

SAML v2.0

Security Assertion Markup Language (SAML) is an XML-based open standard for exchanging authentication and authorization data between parties. SAML was launched in 2001 and is managed by the OASIS Security Services Technical Committee.

The SAML specification defines three roles,

  • The principal, which is typically the user looking to verify his or her identity
  • The identity provider (idP), which is the entity that is capable of verifying the identity of the end user
  • The service provider (SP), which is the entity looking to use the identity provider to verify the identity of the end user

[Figure: SAML v2.0 use case]

Security Considerations

1. XML Signature wrapping (XSW), November 2011

A group of researchers presented a paper in 2011 where they used an XML Signature Wrapping vulnerability to impersonate any user.

2. HTTP Referrer Attack, August 2012

An HTTP Referrer attack occurs when a message moving between the service provider and identity provider is intercepted and its referrer modified, prompting the service provider to return the authentication response to the attacker, who can then use it to authenticate with the service provider later on. Ensuring the transport layer uses SSL/TLS can mitigate the HTTP referrer attack.

3. Signature Exclusion Attack, August 2012

A signature exclusion attack relies on an application designed to accept a message whose signature element has been removed. Ensuring the authenticity and integrity of a message by designing an application which requires a signature will mitigate this attack.
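The mitigation can be illustrated schematically; the `requireSignature` helper below is hypothetical, and its string-level check merely stands in for real cryptographic XML signature validation:

```javascript
// Schematic check against signature exclusion: reject any response whose
// Signature element is missing, instead of silently accepting unsigned
// messages. A real SP must also cryptographically verify the signature;
// this string-level check only illustrates the "require a signature" rule.
function requireSignature(samlResponseXml) {
  if (!samlResponseXml.includes('<ds:Signature')) {
    throw new Error('rejected: unsigned SAML response');
  }
  return true;
}

const signed = '<samlp:Response><ds:Signature>...</ds:Signature></samlp:Response>';
console.log(requireSignature(signed)); // true: signature element present
// requireSignature('<samlp:Response></samlp:Response>') would throw.
```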

OAuth 2.0

OAuth is an open standard whose development began in 2006; OAuth 2.0, its current version, focuses exclusively on authorization, differentiating itself from OpenID and SAML, which were created for the purposes of authentication.

The OAuth 2.0 specifications define the following roles,

  • The end user or the entity that owns the resource in question
  • The resource server (OAuth Provider), which is the entity hosting the resource
  • The client (OAuth Consumer), which is the entity that is looking to consume the resource after being authorized by the resource owner

[Figure: OAuth 2.0 use case]

Security Considerations

1. Account Takeover Vulnerability, November 2016

In November of 2016, three researchers presented a paper at Black Hat Europe describing what was then the most common OAuth 2.0 vulnerability as follows:

“The root cause of this vulnerability is a common, but misplaced trust in the authenticating information received by the 3rd party app’s backend server from its own client-side mobile app, which in turn, relies on potentially tampered information obtained from the client-side mobile app of the IdP,” the security researchers explained.

The researchers went on to suggest a number of precautions which could help mitigate the vulnerability including “clearer and more security-focused usage guidelines for their OAuth 2.0 based SSO APIs”. They also recommended that mobile apps which require backend servers only trust direct communications with the identity provider which would issue a private identifier per mobile app in addition to conducting more thorough mobile application security testing.

The OAuth 2.0 protocol relies on the underlying transport layer to provide confidentiality and integrity by employing technologies like SSL/TLS. Since OAuth 2.0 does not support signature, encryption, channel binding, or client verification, care must be taken in its implementation.

2. Client account hijacking through abusing session fixation on the provider

This vulnerability only affects “social login” scenarios.

  • Authentication data provided by the authorization server should not be trusted.
  • Use CSRF protection for the account linking and always request explicit consent from the end-user.

3. Account hijacking by leaking authorization code

  • Authorization code may leak to external parties through the HTTP Referer header if there are third party components in the redirect URI page.
  • Use only small set of valid redirect URIs and do not allow embedding any third party components on those pages.

4. Leaked client credentials threat

  • Even though not much can be done with leaked client credentials (beyond making calls to the access token endpoint), they should be stored in a secure manner on the server side.
  • If client credentials leak, they should be reset.
  • Tricks exist to bypass redirect_uri validation, so only use an explicit set of redirect URIs.

5. Replay attack

It is not guaranteed that the authorization code is one-time use. Therefore, it is advisable to build a mechanism to prevent replaying authorization codes.
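One possible shape of such a mechanism, as an illustrative in-memory sketch; a real authorization server would persist this state in shared storage and also revoke any tokens issued for a replayed code:

```javascript
// Sketch of replay prevention: treat authorization codes as strictly
// one-time use by recording redemptions and denying reuse. The in-memory
// Set is illustrative; real servers need shared, persistent storage.
const redeemedCodes = new Set();

function redeemAuthorizationCode(code) {
  if (redeemedCodes.has(code)) {
    // A replayed code should be denied (and, per common OAuth guidance,
    // tokens previously issued for it revoked).
    return { ok: false, reason: 'code already used' };
  }
  redeemedCodes.add(code);
  return { ok: true };
}

console.log(redeemAuthorizationCode('abc123')); // { ok: true }
console.log(redeemAuthorizationCode('abc123')); // { ok: false, reason: 'code already used' }
```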

At a Glance

Differentiating OpenID Connect, OAuth 2.0, and SAML

                  OpenID Connect                 OAuth                                    SAML
Current version   OpenID Connect (2014)          OAuth 2.0                                SAML 2.0
Dates from        2005                           2006                                     2001
Main purpose      Single sign-on for consumers   API authorization                        Single sign-on for
                  + identity/auth services       between applications                     enterprise users
Protocols         XRDS, HTTP, JSON               JSON, HTTP                               SAML, XML, HTTP, SOAP

Other Protocols

There are a growing number of federated identity options; here are a few examples.

Higgins

Higgins is an open source protocol that allows users to control which identity information is released to a third party. Higgins development is focused on three areas: first, active clients that work with browsers across platforms, including mobile; second, a personal data store (PDS), being developed for the Higgins 2.0 launch, which will give users control over the data they share; and third, an IMI- and SAML-compatible identity provider enabling IMI and OpenID compatibility.

U-Prove

U-Prove is a token-based credential established in 2012 whose core specifications were released under Microsoft’s Open Specification Promise. U-Prove tokens can contain any kind of attribute, similar to public key infrastructure (PKI) certificates, yet differ in two significant ways. First, the token’s public key and signature cryptographic “wrapping” is done in a way where the attributes “contain no correlation handles making it impossible to track U-Prove tokens even in a situation where insiders might collude”. Secondly, U-Prove users have the ability to disclose only the minimum information required, such as proving their age falls within a range without revealing their actual age.

MicroID

MicroID is a decentralized identity layer for the web, built on microformats, that allows anyone to claim verifiable ownership of their websites, as well as of content hosted anywhere, by using an identifier composed of a hashed communication and identity pair.

Conclusion

With the continued proliferation of hybrid systems, protocols, and countless devices, federated identities have firmly entrenched themselves in our daily lives. However, the convenience promised by technologies like OAuth 2.0, SAML 2.0, and OpenID Connect necessitates that the attack surface they generate be carefully scrutinized, not only during deployment but on an ongoing basis, ensuring the service they provide does not become as convenient for attackers as it is for users.

Mxi Case Study – OWASP Compliance

Executive Summary

MXI is the Ottawa-based developer of Maintenix, an aviation maintenance management software solution that many of the world’s leading airlines depend on. As MXI expanded to provide its solution to ever larger airlines in the US and elsewhere, more and more questions arose regarding the company’s adherence to application security. In order to eliminate this objection to further sales, MXI enlisted Software Secured to help:

  • Accurately assess Maintenix compliance to Open Web Application Security Project (OWASP) Top Ten.
  • Provide a demonstrable plan with accurate timelines to remediation and compliance.
  • Provide proof to their potential clients that MXI takes security seriously.

The Challenges for MXI

MXI employs senior engineers and architects with deep knowledge of the technical aspects of its solution. However, MXI was looking for a team of security experts who could identify the application security risks in large applications with millions of lines of code.

They also needed to help the development team prioritize and remediate the findings while minimizing the effect on the deliverables of the development staff.

How Software Secured Helped

Software Secured used our comprehensive approach to application assessments, which combines web application penetration testing and security code review to assess Maintenix. The Software Secured team effectively helped MXI reach OWASP Top Ten compliance status, which led to MXI closing a deal with one of the largest operators in North America.

Using the Software Secured managed application security service, MXI was able to integrate application security into its software development life cycle, which gave the company peace of mind that clients’ data is protected from application-level attacks, as well as the ability to show clients immediate evidence of adherence to application security best practices.

Results, Return on Investment, and Future Plans

Not only did Software Secured help MXI become OWASP Top Ten compliant, we also provided a detailed code audit and a list of the issues that could lead to cyber attacks, prioritized by risk. Software Secured also provided remediation steps to fix these issues. MXI was able to provide the assurance and guarantees required by its clients. More and more customers are requesting proof that your software has no security vulnerabilities. With Software Secured on your team, you will have one less hurdle to overcome in your sales cycle.

Getting Started Integrating Security into the SDLC

Many information security problems can be traced to coding flaws. That’s not news, yet many of those coding flaws continue to appear in programs year after year.

Why does that happen? As cut and dry as coding can be, there’s still a lot of art involved in the process. Whenever art enters the picture, so does inconsistency because every programmer has their own approach to how they code. That can make code review during an application’s lifecycle challenging to both developers and security teams, who often are at odds with each other because they have different priorities.

Developers want to create apps with cool features, meet development deadlines and minimize the time it takes to bring their software to market. On the other hand, security teams want to make sure an app isn’t vulnerable to malicious attacks when it’s released. Moreover, security teams typically don’t have members with coding backgrounds, which can create communication problems between the two groups.

Conflicting priorities or not, the growing problem with information security — the latest figures from the Identity Theft Resource Center show in the first nine months of this year alone, there were 708 breaches exposing 28.8 million records — has persuaded many organizations that they need to step up their security game.

One way to do that is to create more secure apps. And the best way to do that is to add security activities to all phases of the software development lifecycle (SDLC).

Brass Buy-In

Integrating security and risk management into the SDLC, though, can seem daunting to an organization. “Where do we even start to do this?” it may ask itself. One place to start — as is the case with most major initiatives in organizations — is with upper management buy-in. In addition to giving the SDLC security program gravitas, upper management support can resolve conflicts between developers and the security team and align business and security goals so they complement each other.

For example, time will have to be added to an app’s project plan to accommodate security. For organizations with a desire to rush apps to market, that may not sound very appealing, but that extra time can amount to added value for the app. A popular rule of thumb often cited is $1 spent on security can save $10 during development and $100 after release.

Upper management support can also remove friction between security and development groups through training that enables them to understand each other’s needs. That’s especially true for members of the development team who may have some fear and uncertainty about security because it’s alien ground for them.

What’s more, training everyone involved in the various stages of the SDLC — developers, testers, architects and others — in software security principles can save money by fixing problems at the right place by the right person.

Along with training, support is one of the cornerstones of an SDLC security program. Changing the ingrained behavior of developers and getting them to acquire good security habits is challenging enough, but it will be even more challenging without the allocation of the processes, tools and mentors to help development teams deliver secure software on time.

Small Steps for Big Results

After obtaining executive support, organizations embracing SDLC security will typically start with penetration testing and code review. Those tasks can be implemented with relative ease because the tools for performing them are well-known and readily available in the industry. Moreover, those tasks can often be automated or easily integrated into the SDLC. Tools for source code analysis are usually linked to bug tracking and reporting schemes so security problems will automatically be brought to the attention of the development team where the issues can be fixed before the code is sent to Quality Assurance or production. Many of the tools can also be tied to Integrated Development Environments, such as Eclipse and Microsoft Visual Studio, so making iterative fixes during the coding cycle can be done quickly and within ordinary workflows.

Whenever a change appears intimidating, it’s best to approach it in small steps. That’s true for adopting an SDLC program, too. For instance, a routine code scan can be added as an automated step to the nightly build process. Another small step could be adding a scanning “toll gate” at the end of each phase of the project. To pass through the toll gate, all critical vulnerabilities uncovered by the scan would need to be fixed. Divvying up the burden of the SDLC security process can also make it less onerous. The Quality Assurance team could act as an intermediary between the security folks and developers. In some cases it might even make sense to have the development team do the static analysis of an app’s code and the QA and security teams do the dynamic scanning of it.
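The “toll gate” idea can be sketched in a few lines; the gate function and the shape of the scan report below are hypothetical, standing in for whatever format your scanner emits:

```javascript
// Sketch of a scan "toll gate": fail the phase or build step whenever the
// scanner reports any critical findings. The findings array is an invented
// example of what a scan report might contain.
function tollGate(findings) {
  const critical = findings.filter(f => f.severity === 'critical');
  return { pass: critical.length === 0, blocking: critical };
}

const report = [
  { id: 'XSS-12', severity: 'critical' },
  { id: 'INFO-3', severity: 'low' }
];
console.log(tollGate(report).pass); // false: one critical finding blocks the gate
```

Wired into a nightly build, a non-passing gate would simply fail the job, forcing the critical issues to be fixed before the project moves to the next phase.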

Incorporating security into the SDLC may be challenging, but those challenges can be surmounted. All it takes is cooperation and commitment.

There is More to Application Security than Bulletproof Code

In recent months, momentum has been mounting for developers to write code for their applications that is more secure. While writing secure code is vital to the security of an organization, it’s not the final word in creating applications resistant to attacks.

A number of potential run-time flaws can be identified and corrected while source code is being written. Nevertheless, there may be errors in an application that can only be discovered when the application is running or under attack. Such errors may stem from code paths being taken given the current data and the current state of the application, how memory is used, how an application functions over time or even something as simple as how a program displays error messages.

When a developer reviews how their application is handling errors, they may see nothing insecure about the code and indeed, there may be nothing insecure about it. However, when the application runs and an error is produced, the message explaining the error may be creating a security risk that wasn’t apparent to a programmer concentrating on code alone.

Error messages need to negotiate narrow straits. They need to be meaningful to a user and give support staff the diagnostic information needed to correct the errors, but not give too much information to a hacker. For example, when a user commits an error while logging into a system, an error message such as “User Name Correct, Password Incorrect” is to a hacker what a Milk-Bone is to a dog. Instead of wondering whether the credentials they’re trying to use are any good at all, the attacker knows they have a valid username and can focus on cracking the password. Worse yet, the attacker can reuse that confirmed username or email to attack new targets.
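The safer pattern can be shown in a few lines: a hypothetical login handler that returns the same generic message whether the username or the password was wrong, with an invented in-memory user store standing in for a real credential system (which would, of course, hash passwords rather than compare plaintext).

```javascript
// Sketch of a login error that avoids confirming which credential failed:
// both a bad username and a bad password yield the same generic message.
// The user store and plaintext comparison are for illustration only.
const users = new Map([['alice', 'correct-horse']]);

function login(username, password) {
  const stored = users.get(username);
  if (stored === undefined || stored !== password) {
    return 'Invalid username or password'; // no hint about which was wrong
  }
  return 'Welcome';
}

console.log(login('alice', 'wrong'));   // Invalid username or password
console.log(login('mallory', 'guess')); // Invalid username or password (same message)
```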

Open Source Headache

Third-party dependencies, and open-source components in particular, continue to be a security headache. Securing native code alone isn’t enough to protect applications because developers don’t completely control all the code used by their programs. Up to 90 percent of an application can be made up of third-party components. A developer can write rock-solid secure code for their apps, but they still don’t know how secure those third-party components are. Many of those components contain open source code with flaws. It’s estimated that 50 percent of the world’s largest companies use applications built on open source components with vulnerabilities.

Vulnerabilities in open source components can be a real problem for developers, especially developers of web applications. “Component vulnerabilities can cause almost any type of risk imaginable, ranging from the trivial to sophisticated malware designed to target a specific organization,” the Open Web Application Security Project noted at its website. “Components almost always run with the full privilege of the application, so flaws in any component can be serious.”

Making matters worse, OWASP continued, development teams don’t focus on keeping the components and libraries they use up to date. “In many cases, the developers don’t even know all the components they are using, never mind their versions,” it added. “Component dependencies make things even worse.”

What’s more, unlike commercial software makers who keep their customers apprised of recently discovered flaws and push fixes to them, most organizations don’t have a reliable way of being notified of Zero Day vulnerabilities or available patches about open source components.
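A minimal sketch of the kind of component inventory check this implies, with an invented advisory list and invented package names standing in for a real vulnerability feed:

```javascript
// Sketch of checking declared dependencies against known-vulnerable
// versions, the sort of component inventory OWASP recommends. The advisory
// data and package names are invented for illustration, not a real feed.
const advisories = {
  'left-pad-ish': ['1.0.0', '1.0.1'],  // hypothetical package/versions
  'old-xml-lib':  ['2.3.0']
};

function vulnerableDeps(dependencies) {
  return Object.entries(dependencies)
    .filter(([name, version]) => (advisories[name] || []).includes(version))
    .map(([name, version]) => `${name}@${version}`);
}

const deps = { 'left-pad-ish': '1.0.1', 'old-xml-lib': '2.4.0' };
console.log(vulnerableDeps(deps)); // [ 'left-pad-ish@1.0.1' ]
```

In practice this is what dependency-audit tooling automates: it keeps the advisory list current so teams learn of newly disclosed flaws without watching every project themselves.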

Middleware and Config Vulnerabilities

Not only do apps work with vulnerable components, but they may also be called on to work with middleware. Middleware is useful because it mediates network services to applications through devices like web and application servers. However, middleware can create its own security problems, problems that won’t be apparent from code review of an application alone.

For example, if an application has authentication and access privilege control problems with its middleware, those won’t be discovered until the application runs and interacts with the middleware. The same is true for potential security vulnerabilities that could lead to interception or viewing of information in a workflow, or that could undermine the integrity of transactions on the network.

Errors in configuration files can be another fertile area for vulnerabilities that won’t become apparent by securing code alone. A programmer can be very meticulous about security, but if their application is misconfigured it can be as vulnerable as a sloppily coded one. Moreover, the problem can be exacerbated by many configuration settings defaulting to values that introduce vulnerabilities into the application or the middleware it’s using.

Making matters worse, web application config files can be changed at any time — even after an application is in production. A well-meaning administrator could open an application to attack by diddling a config file.

Any organization concerned with protecting its information assets needs its development teams to write secure code, but it can’t stop there. It has to test apps as they’re running, too.

Why Your Network Protection Strategy Won’t Protect Your Applications

After years of cajoling by security experts, organizations are finally starting to embrace the layered approach to cyber security. They realize firewalls can be breached and network traffic needs to be monitored if they’re to keep attackers at bay.

Many organizations, though, aren’t quite sold on protecting the next layer, the application layer, which can be the most complicated layer to secure. That’s because it can be a costly endeavor that lies outside the comfort zone of the typical stakeholders, such as developers, project managers, product managers, and quality assurance engineers.

Besides, if an application starts misbehaving, won’t the network security systems pick up that activity and shut it down?

Yes and no. Just as a firewall isn’t a perfect shield against attacks on an organization’s endpoints, devices, and network, network security solutions are imperfect, too. They can keep some, but not all, threats from reaching your apps.

Intrusion Detection/Protection

For example, an Intrusion Protection System (IPS) is designed to sit on the network and examine packet traffic passing through it. It can match data in the packets to a signature database, much as antivirus programs do, to flag malicious traffic. It can also identify anomalies in traffic based on a norm developed by observing traffic over time. In addition to logging suspicious traffic and alerting defenders to it, the IPS can be programmed to block potentially harmful packets from getting to applications.

There are a couple of drawbacks to an IPS that can allow attackers to reach an organization’s applications and, if there are any vulnerabilities in those apps, exploit them. First, an IPS can’t understand web application protocol logic. That means it can’t tell the difference between a normal and a malformed request at the application layer. This shortcoming can result in an attack going undetected or unprevented. Common evasion techniques include obfuscation, encryption, and fragmentation.

For example, the following piece of JavaScript is human readable:

xmlDoc = xhttp.responseXML;
window.onload = function() {
  document.getElementById('label').innerHTML = xmlDoc;
};

Could be obfuscated to (among many ways):

var _0xbc37=["\x72\x65\x73\x70\x6F\x6E\x73\x65\x58\x4D\x4C","\x6F\x6E\x6C\x6F\x61\x64","\x69\x6E\x6E\x65\x72\x48\x54\x4D\x4C","\x6C\x61\x62\x65\x6C","\x67\x65\x74\x45\x6C\x65\x6D\x65\x6E\x74\x42\x79\x49\x64"];xmlDoc= xhttp[_0xbc37[0]];window[_0xbc37[1]]= function(){document[_0xbc37[4]](_0xbc37[3])[_0xbc37[2]]= xmlDoc}

Second, while an IPS can protect a system from known vulnerabilities, it can’t protect it from all potential vulnerabilities. That can result in a burdensome number of false positives: alerts about vulnerabilities that aren’t vulnerabilities. False positives not only divert valuable human resources to dead-end tasks, they divert them away from real attacks, which slows response times to those assaults. Nor does an IPS/IDS understand application-level authorization; there is no way for it to know that the account ID 12345 should be accessible to the user but 12346, as in the following URL, should not:

http://www.vulnerablewebsite.com/account/retrieveAccount?accountID=12346

A variation on the IPS is the Host-based IPS (HIPS). It has a better understanding of what’s going on at the application layer. For example, it knows what TCP and UDP packets can and cannot carry. When it sees deviations from those norms, it can alert defenders to them. Still, even a HIPS lacks a full understanding of web application languages and logic.

Web Application Firewalls

Another network solution designed to protect applications from attack is the Web Application Firewall (WAF). It’s designed to protect applications that face the public Web in ways that an IPS can’t. For instance, WAFs sit in front of an application and examine traffic only to the app. That allows them to do a more thorough job of analyzing the application layer.

In addition, while an IPS monitors network traffic against signatures and anomalies, a WAF can understand the application’s logic. It can determine what an application is requesting and what it expects to be returned. In that way, it can guard the application against common threats such as SQL injection, cross-site scripting, session hijacking, parameter or URL tampering, and buffer overflows.

What’s more, this understanding enables the WAF to detect not only known attacks but unknown ones as well. It does that by identifying unusual or unexpected behavior; when it sees such behavior, it blocks it or alerts someone to it. That said, most WAFs operate using signatures (rules) or a blacklist, two techniques that have obvious shortcomings.

Some Problems with WAFs

The problem with WAFs is they can be penetrated just like other kinds of firewalls. For example, the policies a WAF uses to filter out malicious traffic can be exploited to bypass it. They can also be bypassed with the same attacks used against web apps, such as SQL injection. For example, the following SQL injection payload could easily be picked up by a WAF:

http://www.vulnerablewebsite.com/account/retrieveAccount?accountID=11 Union Select 1,2,3,4,5--

However, the following payloads might not be easily detected by your average WAF:

/*!%55NiOn*/ /*!%53eLEct*/
%55nion(%53elect 1,2,3)-- -
+#1q%0AuNiOn all#qa%0A#%0AsEleCt
union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A
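A minimal demonstration of why such payloads evade naive matching: a simple “union select” signature (an illustrative rule, not taken from any real WAF) catches the plain payload but misses the encoded variant, even after URL decoding, because the decoded form uses a parenthesis instead of whitespace between the keywords.

```javascript
// Demonstration of naive WAF signature matching versus an encoded payload.
// The rule below is illustrative; real WAF rulesets are far larger, but the
// evasion principle is the same.
const naiveRule = /union\s+select/i;

const plain   = 'accountID=11 Union Select 1,2,3,4,5--';
const encoded = 'accountID=11 %55nion(%53elect 1,2,3)-- -';

console.log(naiveRule.test(plain));                       // true: detected
console.log(naiveRule.test(encoded));                     // false: slips past the rule
console.log(naiveRule.test(decodeURIComponent(encoded))); // still false: "Union(Select" has no space
```

Robust detection therefore requires decoding and normalizing input before matching, and even then determined attackers keep finding new encodings, which is the article’s point.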

WAFs can also be difficult to configure. “Features” in a misconfigured WAF can become an open door for hackers. A misconfigured WAF can also be a headache for an organization. Such a WAF will start flooding the network with false positives. The network will take a performance hit, users will start grousing and unwanted denial of service issues will arise. When those conditions occur, most organizations will turn off the WAF’s blocking functions. That defeats one of the main purposes of installing the WAF in the first place.

Network protection strategies that use tools like an IPS and WAF can only provide a degree of protection for an application. That means once the network defenses are breached, the application, unless it was built from the bottom up with security in mind, will be at the mercy of the attackers. As more organizations buy into baking security into the development process, we can expect the nature of network security to change. That’s because many of the threats addressed by technologies like a WAF will instead be addressed through integration and testing during the development process, rather than after the code is up and running online.