Five Reasons Your Internal Application Security Program Is Failing

Why is your internal application security program failing?

Having a solid internal application security program is more important than ever. Gone are the days when organizations thought that perimeter security was enough to thwart application-level attacks. Increasingly, organizations are seeing the problem, and they’re taking action to prevent coding flaws that could lead to data breaches and application-level cyberattacks.

They start by getting all hands on deck. What follows is a series of security presentations, tutorials, and even a round of security training. Yet, three months later, no real improvements have been made. Developers, managers, and executives are confused about why things didn’t work out. Then, external and internal security assessments come back with the same persisting issues, such as SQL injection and XSS.

Unfortunately, planning and building out an internal application security program can be a pain. We’ve put together the top five reasons internal application security programs fail, based on what we’ve seen while working with organizations.

1. Depending Too Much on Static Code Analysis Tools

Many organizations lead the program with a static code analysis tool, and why not? The tool was likely sold to them as a bulletproof magic wand that will identify all the security flaws and help fix them.

No internal application security program is really scalable without a good tool. The problem starts, however, when organizations put all their faith in the tool without developing a proper application security program around it. Such a program should include guidelines, proper training, and processes to identify the issues that the tool doesn’t usually find. Without the program, those are exactly the issues that go missed.

2. Providing Training Without Actual Tools

Some organizations lead by training their development staff without giving them the capability to execute the techniques they learned in the training.

Understanding what a vulnerability looks like is the first step. But it must be followed by systematic ways of finding that vulnerability, as well as reliable, time-effective techniques for verifying it.

Keep in mind that your software developers are not security experts, just as they are not QA experts or usability experts. They should have a solid grounding in security and be able to find and fix the basic forms of vulnerabilities. But they can’t be experts in security unless it is their full-time job.

3. Lack of Goal Setting and Proper Communication

For the longest time, application security was an IT issue. IT staff, who weren’t trained in application security, used the same techniques to test applications as they used to test networks. That mistake led to years and years of exposed applications, delayed vulnerability remediation, and lots of time spent on what could have been a very straightforward vulnerability mitigation process.

Now, many organizations are putting the load on development teams to solve the application security problem. However, this expectation is not clearly defined or properly communicated.

Development staff are usually engineers, and engineers are typically not very good at working toward vague goals. They know their software development tasks are done when the code works with no bugs, but they have no idea when their code is secure. Most often, nothing special happens when the code is secure versus when it is not.

4. No Champion

Organizations that are doing Scrum properly use a scrum master: a facilitator for a product development team that uses Scrum.

We can argue that proper application security programs require a security master: someone who facilitates the secure software development efforts and ensures that security activities are done. A security master consistently makes sure that security bugs are addressed and that developers receive the training they require. Finally, a security master guides developers through the secure development process, answers questions, or points them in the right direction.

5. No Alignment with Personal Goals

Often, this is the one reason that executives will never hear. Yet, developers keep saying it over and over to themselves and to each other: What’s in it for me? Why should I put the extra time and effort into security? How does it align with my role, my career, my promotion, my raise? Heck, forget about me; how does security align with the company’s goals and future?

A frowning executive might answer, “Well, if a data breach happened and we lost customers’ data, that would reflect badly on our brand and the health of our financials.”

To that, a developer responds, “Great, then which one is more important: security or deadlines?” Developers are usually under very tight deadlines, and adding another task without proper alignment to their careers doesn’t work. When alignment is strong, magic starts to happen: developers find creative answers to difficult security questions, and the famous question, “Which one is more important: security or deadlines?” stops being an issue.

The reason? Because good developers will find ways to achieve both.

What do you think? Have you seen one or more of these reasons for failure on an application security program? Are there other important ones to add? Once you understand the cause, it’s that much easier to fix.

Why don’t developers write more secure code?

Developers have been rapped in some circles for writing code with security flaws, but is such criticism justified?

Here are our top 6 reasons why developers don’t write more secure code.

Where is security on developers’ priority list?

Programmers certainly have a lot on their plates, and while security has been a burning issue in recent times, it hasn’t been a top priority for developers.

A survey of more than 200 developers conducted a few years ago identified half a dozen developer priorities. In order of importance, they were: functions and features as specified or envisioned, performance, usability, uptime, maintainability and, at the bottom of the list, security.

Less Help Received from the Quality Assurance Department

Without a doubt, security can get in the way of some of those priorities, which is why developers moan about it. However, developers have had to adjust to similar scenarios in the past. For example, Quality Assurance used to generate debates about the right ratio of developers to test engineers. Today that’s less of an issue, because every developer takes on responsibility for testing and builds unit tests every time they add new features and functionality. QA testers haven’t totally disappeared, but there are fewer of them than there used to be, and most of them perform manual tests that are difficult to automate. The same thing has to happen with application security. It needs to be embedded into the workflow where, in the long run, it can help productivity, not hurt it.

Most Developers Don’t Know What Secure Code Looks Like

Although there may be some resistance by developers to expanding their roles in securing software, most want to write secure code; many just don’t know what that means. They know some basics (validate input, check for buffer overflows, encrypt data in transit, limit privileges) but many aren’t equipped to address advanced problems such as authentication weaknesses, application logic flaws and advanced input validation.


Security is a Marginal Topic in Higher-Ed Curricula

That knowledge gap is not surprising, because developers haven’t received much training in writing secure code. Just as security isn’t high on a developer’s list of priorities, teaching students how to incorporate security thinking and awareness into code design, development and testing hasn’t been high on universities’ priority lists either. In a recent study of cybersecurity education at the top 36 computer science programs in the United States, researchers found that none of the top 10 programs requires a cybersecurity course for graduation, and three of them don’t even offer an elective course in cybersecurity.

Security Tools are a Major Frustration Point to Developers

If a code warrior doesn’t know they’re introducing security flaws into their code, there’s a tendency to believe that what they’re producing is secure. That’s especially true when reports from the security team contain vulnerabilities that are not vulnerabilities at all but false positives. So if an organization expects its developers to buy in to taking greater responsibility for security, it needs to make sure it has good tools: vulnerability scanners, source code analyzers and savvy application security SMEs to educate developers, without prejudice, that their code needs improving.


Tools can be another pain point for developers who want to produce secure code. It’s not uncommon to hear developers complain that the tools they have lack the sophistication needed to identify security risks and fix them. On the other hand, many developers aren’t willing to spend the time necessary to tweak those tools to get more out of them with less pain.

Mixed Messages Received by Developers

A fair part of why developers don’t write more secure code is that they receive mixed signals about their role in producing secure code. The security industry often touts new products as alternatives to secure coding. That was the case with Web Application Firewalls (WAFs) and Runtime Application Self-Protection (RASP). The pitch for WAFs was that they could stop attackers before they could exploit flaws in an application’s code. Theoretically, that diminishes the risk created by insecure code and relieves the pressure on developers to write flawless code. In reality, though, hackers found ways to defeat WAFs, so producing secure code remains important.

In the same vein, RASP is being sold as the answer to flawed programming. It’s designed to see into applications and shut them down if they misbehave. That’s fine as a temporary fix, but to get the app running again, whatever’s wrong with it needs to be fixed. RASP can reduce the risks created by insecure code (although it’s limited in the classes of vulnerabilities it can protect against), but it’s not going to make an application as secure as it could be if its code were written with security in mind during the design and build phases.

No matter how sophisticated the tools get, they will not run themselves; they need engineers with expert security skills to run them.

Developers can also be sent mixed signals about writing secure code by their own organizations. Executive buy-in to writing secure code is as important as getting the developers themselves to embrace the concept. Unless management understands the value of secure coding and conveys its support through things like training and the purchase of state-of-the-art tools, any effort to improve the security practices of coders is unlikely to gain traction.

13 Tools for Checking the Security Risk of Open-Source Dependencies

Why Should We Look to Check Security Risk?

Did you know that up to 90 percent of an application typically consists of third-party components, mostly open source? And did you know that more than 50 percent of the Global 500 use vulnerable open-source components? Clearly, open-source components are growing in popularity, and it is important to regularly check the security risk they carry.

Today, software development crowdsources an enormous amount of work to a large community of open-source developers and communities. Yet, there is very little understanding of the security problems that this creates, let alone ways to manage the risk. We all know that we can’t stop using open source, and no one wants to stop using it. Recently, in a survey by BlackDuck Software, 43 percent of the respondents said they believe that open-source software is superior to its commercial equivalent.

Open source is powerful, and the best developers in the world use it. However, it’s time to stop ignoring the security concerns and start tracking the dependencies in your software. First, I’ll give you a quick analysis of the ongoing security problem of open-source software dependencies and why you should check security risk. Then, I’ll wrap up with a list of tools to start using now to get ahead of the curve.

Software dependencies are often the largest attack surface.

Organizations usually assume most risks come from public-facing web applications. That has changed. Now, dozens of small components are in every application. As such, risks can come from anywhere in the codebase.

Bugs like Heartbleed, ShellShock, and the DROWN attack made headlines that were too big to ignore. In reality, though, most bugs found in dependencies go unnoticed.

There are several reasons for this problem. For starters, most organizations do not have accurate inventories of the software dependencies used by different applications. Additionally, most organizations don’t have reliable means of being notified when zero-days are found or when patches are made available; typically, all they get is whatever sparse notification comes from the community supporting the project.

Open-source vulnerability information is fragmented.

Most organizations search the CVE list and the NIST Vulnerability Database for vulnerability information. Unfortunately, these sources provide very little information on open-source vulnerabilities. Information on open-source vulnerabilities is distributed among so many different sources that it’s very hard to track.

Adding insult to injury, OSVDB just closed shop. It was one of the largest vulnerability databases mostly dedicated to tracking open-source-specific vulnerabilities, and it follows others, such as SecurityFocus, into retirement. This led to the emergence of other security repositories, such as the Node Security Project for JavaScript/Node.js-specific vulnerabilities and RubySec for Ruby-specific vulnerabilities. Despite this, there are still a lot of projects and ecosystems that just aren’t well covered.

Organizations still believe that open source code is more secure.

The misconception about open source being more secure started with what’s known as Linus’ Law, named in honor of Linus Torvalds and formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar. The law states:

“Given enough eyeballs, all bugs are shallow.”

This statement might have been relevant when the book was first published in 1999. However, it is far from relevant nowadays. A bug such as ShellShock existed in the Bash shell for more than 22 years, and Heartbleed sat in the OpenSSL library for over two years before it was discovered. The biggest problem is that organizations still believe that open-source code is more secure than commercial code. Just read this Reddit thread to understand how people view this topic.

Don’t get me wrong: I am not suggesting that open source is less secure than commercial software. What I am saying is that without intentional effort to secure a piece of code (open source or not), that code is not secure. Intentional effort means activities such as code inspection by trained “eyeballs,” dynamic security scanning, penetration testing, etc.

The open-source ecosystem is more fragile than we think, and that’s scary.

The whole dependency ecosystem is fragile. A recent incident gave the entire Node.js community a brutal reality check when one programmer almost broke the internet by deleting 11 lines of code. Attackers could have easily taken over the namespaces of the removed packages, bumped the version, and published malicious code in place of the expected code.

Luckily, one nonmalicious developer was able to grab over 240 of said packages before they fell into the wrong hands.

Trying to fix the problem.

OWASP recognized this problem and added “Using Components with Known Vulnerabilities” to the OWASP Top 10 in 2013. This is the definition of the issue according to OWASP:

“Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defences and enable a range of possible attacks and impacts.”

Different open-source and commercial tools have emerged over the years to tackle this problem, and each tool or service tackles it a bit differently. My consulting firm reached out to the project leaders and company CEOs to get their feedback on how they believe their tools contribute to the solution and where they see their tools’ future.

Node Security Project (NSP)

The NSP is known for its work on Node.js modules and NPM dependencies. It provides tools that scan dependencies for vulnerabilities using public vulnerability databases, such as the NIST National Vulnerability Database (NVD), as well as its own database, which it builds from the scans it does on NPM modules.
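
If you want to try it, usage at the time of writing looks roughly like this (a sketch; the nsp CLI and its options may have changed since):

# Install the Node Security Project CLI (assumes Node.js/npm are installed)
npm install -g nsp
# Audit the dependencies declared in the current project's package.json
nsp check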

Adam Baldwin from the NSP sees a future where dependency security is part of the SDLC. “Soon you will see a number of products from us including continuous security monitoring and integration with GitHub (and other products) so that you can plug in security monitoring, detection, alerting, and remediation for the areas of your development lifecycle that are relevant to you.”

RetireJS

RetireJS is an open-source, JavaScript-specific dependency checker. The project focuses primarily on ease of use, which is why it has multiple components: a command-line scanner and plugins for Grunt, Gulp, Chrome, Firefox, ZAP, and Burp. RetireJS also offers a site-checking service for JS developers who want to see if they’re using a JavaScript library with known vulnerabilities.
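
As a quick illustration, the command-line scanner can be pointed at a project folder (a sketch; flags may vary by version):

# Install the scanner, then report any JavaScript libraries with known vulnerabilities
npm install -g retire
retire --path .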

RetireJS retrieves its vulnerability information from the NIST NVD as well as a multitude of other sources, including mailing lists, bug-tracking systems, and blogs for popular JavaScript projects. Erlend Oftedal from RetireJS thinks that security is everyone’s problem and more collaboration is needed. He said, “I would like to see authors of popular open-source frameworks themselves start reporting security fixes to tools such as Retire.js in order to keep the users of their software safer.”

OSSIndex

OSSIndex supports several technologies. It extracts dependency information from NPM, Nuget, Maven Central Repository, Bower, Chocolatey, and MSI (which means it’s covering the JavaScript, .NET/C#, and Java ecosystems). OSSIndex also provides a vulnerability API for free.

OSSIndex currently retrieves its vulnerability information from the NIST NVD. Ken Duck from OSSIndex plans to include automated importing of vulnerabilities from some key mailing lists, databases, and bug-tracking systems in the near future.

Dependency-check

Dependency-check is an open-source command-line tool from OWASP that is very well maintained. It can be used in stand-alone mode as well as in build tools. Dependency-check supports Java, .NET, JavaScript, and Ruby. The tool retrieves its vulnerability information strictly from the NIST NVD.
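
A minimal invocation might look like this (a sketch; it assumes the command-line distribution is on your PATH, and the project name and paths are illustrative):

# Scan an application's libraries and write a report to ./report
dependency-check.sh --project MyApp --scan ./lib --out ./report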

Bundler-audit

Bundler-audit is an open-source, command-line dependency checker focused on Ruby Bundler. The project retrieves its vulnerability information from the NIST NVD and RubySec, a Ruby vulnerability database.
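
Typical usage is a one-liner from the project root (a sketch; assumes Ruby and the bundler-audit gem are installed):

# Refresh the local copy of the advisory database, then audit Gemfile.lock
gem install bundler-audit
bundle-audit check --update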

Hakiri

Hakiri is a commercial tool that offers dependency checking for Ruby and Rails-based GitHub projects using static code analysis. It offers free plans for public open-source projects and paid plans for private projects. It uses NVD and the Ruby Advisory Database.

Vasily Vasinov from Hakiri says that future plans for the software include adding integrations with Slack, JIRA, and Pivotal Tracker as well as supporting other platforms such as Node.js and PHP.

Snyk

Snyk is a commercial service that focuses on JavaScript NPM dependencies. New to the scene, Snyk is in a league of its own: not only does it offer tools to detect known vulnerabilities in JavaScript projects, it also helps users fix these issues using guided upgrades and open-source patches that Snyk creates.
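
At the time of writing, the guided flow looks roughly like this (a sketch; Snyk’s CLI has evolved since):

# Test the current project against Snyk's vulnerability database
npm install -g snyk
snyk test
# Interactively choose upgrades or patches for the issues found
snyk wizard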

Snyk has its own vulnerability database, which gets its data from the NIST NVD and the NSP. Snyk’s focus is on scaling known-vulnerability handling across an entire organization and its teams, with better collaboration tools and tighter GitHub integrations. Snyk’s CEO, Guy Podjarny, indicated that future plans include building runtime tools that will give developers better visibility and control when running open-source packages on production systems.

Gemnasium

Gemnasium is a commercial tool with free starting plans. Gemnasium has its own database that draws from several sources; the vulnerabilities are reviewed manually on a daily basis, so advisories are not automatically published.

Gemnasium provides a unique auto-update feature that uses a special algorithm to test smart combinations of dependency sets instead of testing all the combinations. This saves a bunch of time. Gemnasium supports Ruby, NPM (JavaScript), PHP, Python, and Bower (JavaScript). Another unique offering from Gemnasium is its Slack integration. Users are notified through Slack in real time as soon as an advisory is detected.

Philippe Lafoucrière from Gemnasium indicated that future plans include an enterprise version of Gemnasium. The new version would run on clients’ premises with more languages supported, starting with Java.

SRC:CLR

Source Clear is a commercial tool with a couple of interesting attributes. It has its own database, which leverages the NIST NVD, and it also retrieves vulnerability information from mailing lists and several other sources.

It offers a ton of plugins for several IDEs, deployment systems, and source repositories, as well as a command-line interface. Finally, Source Clear uses “vulnerable methods identification,” a way to figure out whether a vulnerability found in a dependency is actually being used by the application. This feature dramatically reduces false positives and gives developers detailed, targeted reports for the vulnerabilities that matter. Source Clear just announced plans to offer a free version of its software.

Or Check Security Risk With Our Honorable Mentions

BlackDuck Software, Sonatype’s Nexus, and Protecode are enterprise products that offer more of an end-to-end solution for third-party components and supply chain management, including licensing, security, inventory, policy enforcement, etc.

What are your plans to check security risk for the open-source components of your codebase?

This article appeared first at techbeacon.com.


Quantifying Software Security Risk

What are the frameworks out there that organizations can use to quantify risk?

Risk management is a hot topic across many boardrooms, so much so that the insurance and financial sectors have established frameworks that organizations can use to quantify risk. Across other sectors, however, organizations remain challenged with establishing how to calculate the risks that stem from developing or using software.

When it comes to software, security cannot trump getting the product to market. Rather, frameworks can be used to determine which potential risks not only pose a threat to enterprise security, but can also negatively impact software operations on both the customer and vendor sides. Avoiding risk altogether is the best solution, but it is highly unlikely. Sometimes the best you can hope for is to minimize risk by quantifying the potential impact and degree of risk to software projects and products.

Several folks have put forth frameworks for evaluating risks through the software lifecycle, though there are no established industry standards. Key to any risk assessment strategy is first identifying the likelihood of a vulnerability being discovered and then understanding the impact of that discovery.

To reduce and respond to risk effectively, enterprises must rely on some framework to better quantify it. Here are a few suggested frameworks your company can use to better measure its risks.

  • For those responsible for assessing and managing risk in development and operational settings, there is the Carnegie Mellon University Software Engineering Institute (SEI) risk management framework, authored by Christopher J. Alberts and Audrey J. Dorofee in August 2010.
  • Build Security In: Risk Management Framework is a condensed version of the Cigital RMF, designed to manage software-induced business risks, authored by Gary McGraw in 2005 and revised in July 2013.
  • Risk Management in Software Development, authored by Aihua Yan in November 2008, proposes a model for applying a risk management approach to software development projects.
  • For risk analysis from the point of view of the software vulnerability lifecycle, A Framework for Software Security Risk Evaluation using the Vulnerability Lifecycle and CVSS Metrics by HyunChul Joh and Yashwant K. Malaiya proposes an approach to software risk evaluation.
  • The FAIR Institute’s value-at-risk (VaR) model; the institute is a community that shares best practices and “provides information risk, cybersecurity and business executives with the standards and best practices to help organizations measure, manage and report on information risk from the business perspective.”

How to Confirm Whether You are Vulnerable to the DROWN Attack

Another OpenSSL vulnerability has been uncovered. The new vulnerability is yet another in a series found lately in the OpenSSL library, a toolkit implementing the SSLv2/v3 and TLS protocols with full-strength cryptography, used world-wide.

The library, which powers about 5.5 million websites, has seen several vulnerabilities lately, including blockbusters like Heartbleed. The new DROWN vulnerability follows the same pattern as its predecessors by getting its own website and logo here: https://drownattack.com/

You are vulnerable if either of the following conditions is true:

1. A server in your network accepts SSLv2 connections.
2. A server that does not accept SSLv2 shares its RSA key with one that does.

At Software Secured, we provide managed web application security services. We focus on continuously testing web applications against security flaws such as the OWASP Top 10 and more. Our services also entail notifying clients about zero-days in third-party libraries used by their applications. As part of this service, we kicked off the Software Secured standard procedures to confirm the reported vulnerability.

The DROWN team provided a utility, http://test.drownattack.com, to help test whether domains are vulnerable, but we found this tool reports too many false positives. So Software Secured has documented an alternative process to confirm whether you are vulnerable to DROWN.

Here are the steps you need to follow in order to independently confirm whether you are vulnerable to the DROWN attack.

1 – You need to do the following for all your externally available services that could be communicating over SSL (e.g., web, FTP, SMTP, etc.). We assume that you have an inventory of all your public IPs. Just in case you don’t, one way to build one is using DNSRecon, as shown below.

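A sketch, assuming the dnsrecon tool is installed and substituting your own domain (flags may vary by version):

# Enumerate standard DNS records (A, MX, NS, SOA) to inventory public hosts
dnsrecon -d example.com -t std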

2 – For each IP, you need to list all the services that communicate over SSL. First, list the open ports per IP:

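For example, a quick sketch with nmap (the IP is a placeholder; scan only hosts you own):

# Scan all TCP ports on the host; SSL services often sit on 443, 465, 993, 995, 8443, etc.
nmap -sT -Pn -p- 203.0.113.10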

3 – Ensure that your local OpenSSL client supports SSLv2, since most OpenSSL distributions disable SSLv2 and SSLv3 (as they should); thanks to Dan Astor for the tip. One quick way to test is to force an SSLv2 connection to the domain in question:

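Something like the following (host and port are placeholders):

# Attempt to negotiate an SSLv2-only handshake with the server
openssl s_client -connect example.com:443 -ssl2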

If you get the error “unknown option -ssl2”, then you don’t have SSLv2 enabled locally. This would give you false negatives, as your local OpenSSL client wouldn’t be able to negotiate an SSLv2 connection with the server even if the server has it enabled. To enable SSLv2, follow the instructions here: http://forums.kali.org/showthread.php?98-Adding-support-for-SSLv2-for-SSLScan-and-OpenSSL-testing

4 – To double-check the results, we used SSLyze. Bingo: the service at this IP does support SSLv2 ciphers:

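The check looked roughly like this (a sketch; SSLyze’s options have changed across versions):

# Enumerate the SSLv2 cipher suites the target accepts
sslyze --sslv2 66.6.224.76:443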

5 – Using OpenSSL itself also confirms the results, using the command: openssl s_client -connect 66.6.224.76:443 -ssl2

Conclusion:

  1. Keep in mind that this vulnerability is in a protocol that was deemed problematic at least 20 years ago.
  2. This vulnerability is more problematic if one of the servers in the network supports the faulty version, because that server can be used to intercept traffic to other servers that don’t support it.
  3. Although Software Secured found a very high ratio of false positives using the DROWN team’s check utility versus our own testing lab, we highly recommend you don’t take any chances: test your own servers.
  4. Make sure to test ALL your servers, including web servers, mail servers, FTP servers, etc.

Update:

Some readers indicated that it is possible to exploit this vulnerability even if SSLv2 is disabled, and that merely supporting SSLv2 somewhere could be problematic. I decided to clear this up with the DROWN team and sent the following email:

Nice work. I just had a quick question. In order for a server to be vulnerable, one of the following conditions must happen:

1. The server “enables” SSLv2
2. Another server that enables “SSLv2” shares a key with the server that does not.

If all the servers in a network didn’t enable SSLv2, then the vulnerability can’t be exploited, can you confirm?

And received the following reply 40 minutes later:

Yes, this is correct. But note that even a single SSLv2-enabled server (running on a different port or IP) using the same RSA key makes your server vulnerable. If you can confirm that all your servers are configured correctly to disable SSLv2, you are OK.

How to Quickly Audit Your Cryptography Usage

Cryptography (crypto) is an important security control for any application. It is essential in securing data at rest and in transit. But how do you know your team is following solid cryptography practices? How do you know if there are gaps that need to be addressed?

Below are three questions you can ask your team to get an idea of whether your application is properly protecting clients’ data. This is a quick cheat sheet, not an exhaustive guide on cryptography.

1. Do you have a list of sensitive data stored in your database?

You can’t really protect what you don’t know. There is obviously sensitive data, such as user passwords, credit card data, social security numbers, etc. In addition, there is non-obvious sensitive data, such as customer names and addresses. In many countries, the combination of a person’s name and address is considered private information that should be heavily protected. Typically, selling to larger organizations or to government departments requires a list of the sensitive data collected by the application and its classifications. In the healthcare industry, patient records are protected and regulated in the U.S. by HIPAA and in Canada by PIPEDA.

2. Are you using the proper cryptography for each data type?

The next question is whether you are using the proper cryptography for each data type. There are three main cryptography types commonly used by developers:

a – Hashing: This is the transformation of a string of characters into a (usually shorter) fixed-length value or key that represents the original string. Hashing is used when the original form of the information is not required, which makes it suitable for storing passwords. Keep in mind that attackers can use rainbow tables, so you will need to salt your passwords: use a unique salt for each password before storing it in the database, as sketched below.
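
As a rough illustration using the OpenSSL command line (a sketch only; production systems should use a purpose-built password-hashing scheme such as bcrypt, scrypt, or PBKDF2):

# Generate a unique random salt for this user
openssl rand -hex 16
# Hash the salt concatenated with the password; store the salt and digest, never the password
echo -n "<salt><password>" | openssl dgst -sha512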

b – Encryption: There are two types of encryption. The first is symmetric encryption, which is used to encrypt data such as bank accounts, credit card numbers, etc. The second is asymmetric encryption, which is mostly used to exchange other secrets (for example, symmetric keys). A symmetric example follows.
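
For instance, encrypting a file with AES on the OpenSSL command line might look like this (a sketch; the file names are placeholders, and key management, the hard part, is out of scope here):

# Encrypt with AES-256-CBC, deriving the key from a passphrase
openssl enc -aes-256-cbc -salt -in accounts.txt -out accounts.txt.enc
# Decrypt
openssl enc -d -aes-256-cbc -in accounts.txt.enc -out accounts.txt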


c – Message Authentication Code (MAC): This type produces a digest of a message to ensure integrity. It works much like hashing, except that it includes a secret key used to authenticate the message’s integrity. MACs are used a lot when sending data where integrity matters more than confidentiality: it may not matter whether others can see the message, but it must not be changed in transit. See the sketch below.
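
A quick HMAC sketch with the OpenSSL command line (the key is obviously a placeholder):

# Produce an HMAC-SHA256 digest of a message using a shared secret key
openssl dgst -sha256 -hmac "shared-secret-key" message.txt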

3. Are you using the proper cryptography algorithm?

Now we know which sensitive data needs to be secured and which cryptography type to use. The next step is to ensure that we are using a solid algorithm for each cryptography type:

a – Hashing: Anything less than SHA-512 is considered weak today; MD5 has been broken several times, and SHA-1 has also been broken. If you are using hashing to store passwords, the hashes must be combined with a salt that is unique to each user.

b – Encryption: For symmetric encryption, AES and 3DES are considered secure to use today. For key size, the NIST 800-57 special publication has guidelines on minimum key sizes for each algorithm and how long each key size remains good for. There are several well-respected asymmetric algorithms out there; probably the most commonly used is RSA, as in the sketch below.
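
For example, generating an RSA keypair at a key size consistent with NIST 800-57 guidance might look like this (a sketch; 2048 bits is a common minimum at the time of writing):

# Generate a 2048-bit RSA private key, then extract the public key
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem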

c – MACs: As a MAC is one form of hashing, the same criteria for choosing an algorithm also apply here.

This article is not meant to be a complete guide to auditing cryptography. It is intended as a starting point for quickly and effectively finding gaps in the security controls designed to protect stored data.

The Canadian Government Outage and the Raising Profiles of Simple Attacks

The Canadian government was hacked! The Globe and Mail reported a few days back:

A cyberattack crashed federal government websites and e-mail for nearly two hours Wednesday – an incident that raises questions about how capable Ottawa’s computer systems are of withstanding a sustained assault on their security.

It is not news that cyberattacks are being used for more than just stealing information. Showing defiance and protesting, as Anonymous and the Syrian Electronic Army do every now and then, is another example. Hacking is also used for intelligence gathering, spying, public humiliation, and much more.

The question is: are attackers getting better at attacking, or are we getting worse at defending?

The answer is complicated, but attackers are definitely getting better. Not only are they improving in terms of skills and tools, they are also getting smarter and more sophisticated. The most interesting thing is that they are raising the profiles of the same old attacks.

There are hardly any genuinely new attacks nowadays; maybe once in a while we see a new technique for an old attack. What we are mostly dealing with are the same old attacks given new sophistication, complexity, or a facelift. Several things can raise the profile of an attack from ordinary to something that makes headlines everywhere.

Please keep in mind that I am not discussing state-sponsored attacks here; they are a league of their own, and most organizations can’t prevent those kinds of attacks.

1. The Target: “Go big or go home” is definitely valid in the hacking world. The primary factor in raising the profile of an attack is the target the attackers are after, or the organization the attackers exposed. The IRS attack last month was definitely a big deal, as was the breach that caused the identity of every single federal employee to be stolen. The RSA attack was another high-profile one because of the target: in that case, the seeds for SecurID, the two-factor authentication tokens used by a lot of government organizations.

2. The Timing of the Attack: Distributed Denial of Service is a very simple attack. Imagine tens (or hundreds) of thousands of browsers sending requests to the exact same set of servers at exactly the same time, and keeping at it. What is going to happen? The servers will go down for sure, because they can’t handle that kind of traffic. Attackers can do this by controlling botnets; as a matter of fact, botnets are available for rent for as little as $200-$300 a day! Every federal or provincial institution faces a ton of attacks every single day. What made this particular attack against the Canadian government so visible worldwide is the timing:

A. Bill C-51 had just passed in the House of Commons.

B. The federal elections are coming up in October of this year. Perfect timing for visibility for the group Anonymous.

3. Combining Attacks: It is not new that attackers combine several hacking techniques in a single attack. It is typical for attackers to launch a multistage attack, first gaining a foothold inside the organization (probably through an unrestricted zone: marketing, the DMZ, etc.) and then elevating that foothold, gaining privileges to more restricted zones until they get to what they are looking for. It is common, too, for malware to exploit several vulnerabilities or zero-days in a system. A good example is Mikeyy Mooney’s worm, which hit Twitter in 2009 and leveraged a combination of Cross-Site Scripting and Cross-Site Request Forgery. What we are seeing more of these days is the number of zero-days used, and the speed with which zero-days are exploited in the wild. It was reported that attackers exploited the Drupal SQL injection flaw back in 2014 just 8 hours after the initial disclosure; 8 hours is not enough for most organizations to take any kind of action, or in some cases even to get notified about the attacks.

4. Branding of Attacks: Remember Heartbleed? It was a simple bug: the code did not check the boundaries of an allocated buffer, and this let attackers read unencrypted server memory (the best explanation is found in this cartoon). Now, why did it make the headlines, and why was it such a big deal? It did affect half a million websites, but I don’t think that was the main reason. Here is my proof: Drupal suffered a massive SQL injection attack a few months later that affected about 12 million sites, and the impact was grave, since attackers could get a shell on the server, which in security terms means “game over”, attacker wins. As far as impact and reach go, the Drupal attack definitely beat Heartbleed. Now ask 100 people in different departments of any organization: chances are most will remember Heartbleed, and very few will remember or even recognize the Drupal SQL injection attack. Why? Heartbleed had a name and a logo. It had a brand!

5. Branding of Attackers: Ten years ago, all the bad guys were labeled either hackers or blackhats. Now that hacking has gone mainstream, attackers are grouped: either they name themselves or they are given a name. There are hacktivists, such as the Anonymous group, which mainly protests issues related to free speech, human rights, or freedom of information. The Syrian Electronic Army is a group that claims responsibility for defacing or otherwise compromising scores of websites that it contends spread news hostile to the Syrian government, or fake news. APT (Advanced Persistent Threat), while not a group, is a name given to a specific style of hacking used by state-sponsored attackers, usually attributed to Chinese attackers.

Now that attackers are getting better, organizations’ defences must get better too. There are several things organizations can do to protect themselves against these kinds of attacks:

1. Threat Assessment: The very first step organizations should take is to understand their primary adversaries. Who is most likely to attack your organization? What does this attacker look like? What are their capabilities, and what are they looking to do? What kinds of tools and skill sets do they usually have? Answering these questions will focus your efforts on what should be done and, more importantly, what should not be done.

2. Defence in Depth: We can’t stress enough the importance of defence in depth. When you go on a business trip and check in to your hotel room, how many locks do you secure before you go to sleep? All of them! Why? Isn’t one enough? The truth is, there is no perfect security system, but the more good security controls you use, the harder it is for an attacker to infiltrate your system. The harder it is to infiltrate, the less motivated the attacker will be to attack you versus the next target. And the longer it takes a determined attacker to exploit your organization, the higher the chance that your intrusion detection systems will notify you and you can take action in time.

3. Cyber Security Strategy: Security is not only an IT problem; it is a business risk. Without an organization-wide strategy that covers everything from governance, architecture, and design through implementation, testing, and deployment, security will remain an IT problem, even though it is much bigger than just IT, and the defences will always fall short regardless of the budget thrown at them.

4. Continuous Assessment: Attackers are pounding your defences on a daily basis to find holes, crack them, and see what they can get and what they can do with it. The best organizations do this proactively themselves, leaving nothing for attackers to find. Test your own defences and push them to the limit, because if you don’t, hackers will.

Reading through the IRS Hack: Failures and Analysis

The IRS has reported that thieves stole tax information from 100,000 taxpayers: pretty disturbing news on multiple levels. The first level of disturbance is obviously that an organization like the IRS, which has more information on every single citizen than citizens probably know about themselves, got hacked with that magnitude of a data breach. The other interesting level is that the attackers already had the victims’ Social Security Numbers, address information, dates of birth, and tax filing statuses.

The “Get Transcript” feature was taken offline temporarily after the attack, so we didn’t have a chance to look at it closely. However, several resources available online pretty much sum up how it works.

Several Notes on The Application’s Security Design and Architecture:

1. Authentication Failure: Good authentication requires more than one of the following: “something you know,” “something you have,” and “something you are” (brief descriptions of these are found here). The IRS used four “something you know” factors. Beyond the Social Security Number, the address, date of birth, and tax filing status don’t really add much security, and apparently the attackers didn’t even have to guess, as they already knew this information. The question here: if they already knew what they were stealing, what was the point? This question is answered below in the analysis section.

2. Lack of Proper Attack Resistance Capabilities: Attack resistance is the application’s capability to prevent the attacker from executing an attack against it. To be fair, very few applications get this right, and most organizations outsource it to perimeter security, intrusion detection tools in particular. It was reported that the IRS had IP filters blocking requests after several hundred requests had been made from the same IP. The problem with this technique is that it does not work, because attackers can randomize their IPs. While IPs are still important in detecting automated attacks, timing and geolocation are also important factors.

3. Using Predictable Data in Authentication is Bad: Using the Social Security Number as part of the authentication process is a big no-no in sound security architecture practice, for several reasons. First, the space of valid Social Security Numbers is limited; there are several websites that explain the structure of a valid Social Security Number, and others that will help you generate valid ones, so generating valid numbers is extremely simple. Most probably the attackers didn’t go that route and got this data from somewhere else; nonetheless, using predictable data in the authentication process is definitely a bad idea. Additionally, having the user enter their Social Security Number there raises a bunch of other issues, such as caching, shoulder surfing, and more.

Why Did They Do It?

Again, the big question is: if the attackers already had the victims’ Social Security Numbers, dates of birth, addresses, and filing statuses, then what was the point? What else could they gain from this hack? The answer is: a lot.

1. A Better Price for the Data: It turns out that a Social Security Number is only worth something ($3-$5 each, according to CNNMoney) if it is packaged with a full name. Assuming the attackers stole the PII (SSN, date of birth, and street info) but didn’t have the victims’ full names, full addresses, and marital statuses, guess where they would find that information? In the transcripts. A sample transcript shows all the data the attackers would need to make a good sale of about $300,000-$500,000.

2. Fraud: IRS reported the following:

“In all, about 200,000 attempts were made from questionable email domains, with more than 100,000 of those attempts successfully clearing authentication hurdles,”

This is a 50% success ratio (100K successes out of 200K attempts), so could this have been a data-cleansing operation? The information was stolen from somewhere, but it wasn’t enough to perform a full-scale fraud operation. The IRS paid identity thieves $5.2 billion in 2011 alone, and according to The Chicago Tribune:

“The agency is still determining how many fraudulent tax refunds were claimed this year using information from the stolen transcripts. Koskinen provided a preliminary estimate, saying less than $50 million was successfully claimed.”

3. Identity Theft:
With almost all the information a thief needs to steal someone’s identity, nothing stops them now from launching identity theft attacks. Adding insult to injury, according to the 5th annual study conducted by the Ponemon Institute, a Michigan-based research center, medical identity theft rose sharply in 2014, with almost half a million reported incidents, and the compromise of healthcare records often results in costly billing disputes. Of course, regular identity theft is also a possibility, but the high success ratio (100,000 successful attempts out of 200,000) suggests that the data used was fresh, which brings to mind the Anthem health insurance hack.

The bottom line is that it does not have to be like this; these attacks are avoidable with simple, or more comprehensive, security approaches. The billions lost to attackers and fraudsters could be put to better use, with only a fraction of that amount spent on security in the first place rather than on covering up and dealing with the damage.

Simplified Application Security Code Review

Obviously, it is not 2005 anymore. Ten years ago, most organizations were OK with perimeter security and a vulnerability scanner. The shift from perimeter security to application security started to happen in the U.S. about 4-6 years ago, depending on the industry; some industry verticals have not moved yet, but most did. In Canada, this shift started about 12-18 months ago. A lot of our clients come back to us asking how they can implement a security code review process internally, so that they can catch the obvious vulnerabilities before going to production.
Here are the things that security code review does best:

  • A systematic approach to uncovering security flaws.
  • Close to 100% code coverage.
  • Better at finding design flaws.
  • Finds all instances of a certain vulnerability.
  • The only way to find certain types of vulnerabilities.

The following are the main aspects of a simplified security code review process:

1 – Anchor:

You have to exercise regularly in order to get the full benefits of exercise. Similarly, code review must be anchored on a routine task. One of the best approaches to cement security code review into your SDLC is to anchor it on the nightly build, though it could also be anchored on a different SDLC phase. The diagram below shows the different security code review touch-points that could be added to your SDLC. Each touch-point has pros and cons:

  • At the development phase: you catch problems early on, the work is distributed evenly among the team members, and improvements are fed into the lifecycle early on, which saves development cycles. The drawback is that it is a little tedious and requires a tremendous amount of commitment and discipline from the development team.
  • At the testing phase: vulnerabilities can still be caught early on, and the work can still be distributed evenly among team members. However, it is not as cost-saving as when it is done in the development phase, and it can be a little disruptive to the quality assurance efforts.
  • At the deployment phase: catching vulnerabilities now becomes a little more costly to the development team, since this is so close to deployment. The upside is that it can be done as one big push by the team; most teams bring in external help at this point.
[Figure: Security Code Review Touch-Points in the SDLC]


2 – Automation:

Automation basically means using a static code analysis tool. The biggest disadvantage of static code analysis tools is false positives, but unless you are planning to read every line yourself, you will need to leverage them.
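
As an illustration of anchoring the scan on the nightly build, the automation can be as simple as a cron entry that runs the scanner after the build finishes (a sketch; scan.sh and the paths are placeholders for whatever static analysis tool you choose):

# Run the static analysis scan every night at 3 a.m., after the 2 a.m. build
0 3 * * * /opt/build/scan.sh >> /var/log/code-scan.log 2>&1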

Choosing a static code analysis tool can be a little daunting; there are several public tool comparisons and benchmarks that can help you narrow down the choice.

3 – Manual Review:

Tools are not very good at understanding logic and, consequently, at finding logic problems. Tools are also not very good at finding problems in certain functionality, such as authorization bypass or parameter tampering. That’s why you will need to get your hands dirty from time to time. The following are usually the areas that require manual review:

  • Authentication & Authorization Modules
  • File Upload/Download Modules
  • Encryption Modules
  • Security Controls and Input Filters
  • Business Logic

4 – Reporting:

You need a way to funnel the security bugs you find into the normal stream of development bugs. The key is to figure out a proper priority scheme; not all security bugs are “critical.” The other key point is to include enough information in each bug to make it as easy as possible for the developer to fix it.

These are the simple steps to kick off your internal security code review. The key while starting out is to keep it simple: if you can review your code for SQL injection only, that’s great! Once that is taken care of, move on to the next vulnerability, and so on.

Security code review can be as simple or as complex as you make it. Keep in mind that complex does not necessarily mean better. So keep it simple. 🙂

Cyber Security Laws & Regulations in Canada


Pop quiz: do Canadians and Americans approach cyber security the same way? The answer is a clear and definite no. With the recent passage of HB 1078 in Washington State (see: here), it seemed appropriate to compare the legal attitudes of Canada’s Parliament and the American Senate. The resulting difference might surprise you. To start, Canada still lags legislatively when it comes to information security. To date, 47 different states, D.C., Guam, Puerto Rico and the Virgin Islands have legislation requiring mandatory notification of data breaches involving personally identifiable information (for the full list, see here).

Compared to the 51 US jurisdictions requiring mandatory disclosure, Canada has three provinces with similar legislative requirements (Alberta, British Columbia, Quebec), with various levels of security requirements for different industries throughout the Confederation. Altogether, Canada lacks a comparable legal framework when it comes to information security.

So, what does this mean if you’re a business operating in Canada? To answer exactly how Canadian law impacts security and privacy, this post will briefly look at the Canadian legal landscape.

Laws to Look Out For:

Within Canada, there are three general (and broad) forms of law that regulate security and privacy:

1. The federal PIPEDA.

2. The provincial variation of PIPEDA in Alberta.

3. Various health information acts.

Below, the three different forms of legal regulation are summarized in point form.

PIPEDA

  • A federal law that regulates and enforces privacy policy for both public and private organizations, except where a provincial equivalent meets the same minimum standard as PIPEDA.
  • The acronym PIPEDA stands for the Personal Information Protection and Electronic Documents Act.
  • Criticized for a lack of enforceability, as there is no mandatory disclosure requirement or any penalty for offending parties.
  • Possibly amended by Bill S-4, the Digital Privacy Act, which would introduce mandatory disclosure of data breaches and information leaks.

Albertan PIPA

  • While there are other provincial forms of PIPEDA, the Albertan Personal Information Protection Act (PIPA) differs from the rest, including PIPEDA, in that it goes beyond the minimum standard: it mandates that organizations take measures to protect data, and it introduces mandatory disclosure of data breaches and information leaks.

Health Information Protection Act

  • Legislation that protects private health information. Only three provinces have privacy legislation similar to PIPEDA with regard to health information (Ontario, New Brunswick, Newfoundland).
  • These laws require mandatory reporting of data breaches.

PCI and Ecommerce

[For a detailed updated version of PCI standards visit this article.]

Aside from legal obligations, businesses also need to focus on industry regulations that affect privacy and data security requirements. The most common and well known of these is the Payment Card Industry Data Security Standard (PCI DSS). This compliance standard applies to all merchants that process, store, or transmit credit card information, and it sets a security standard for businesses and their virtual environments.

There are four distinct levels, with each level having progressively more stringent requirements (for a table of requirements, please see here). After each successful data breach, the compromised merchant is escalated to a higher validation level and will be required to adhere to the new minimum requirements.


Last Words

For businesses operating in Canada, information security is a must, just as it is for businesses operating anywhere else. While data breach notifications are not mandatory (except in Alberta, and in Ontario, New Brunswick and Newfoundland for health information), this may change with the possible passing of the Digital Privacy Act. And with PCI compliance a must for conducting business online, information security is vital.

That being said, the main difference between the US and Canada when it comes to cyber security is the proactive stance on consumer protection and information security. Wait for part 2 for a quick scan of America’s legal landscape.
