In recent months, momentum has been mounting for developers to write more secure application code. While writing secure code is vital to an organization's security, it isn't the final word in creating applications resistant to attacks. There is much more to application security than bulletproof code.

Potential Flaws

A number of potential run-time flaws can be identified and corrected while source code is being written. Nevertheless, there may be errors in an application that can only be discovered when the application is running or under attack. Such errors may stem from the code paths taken for particular data and application states, from how memory is used, from how an application behaves over time, or even from something as simple as how a program displays error messages.

When a developer reviews how their application handles errors, they may see nothing insecure about the code, and indeed there may be nothing insecure about it. However, when the application runs and an error occurs, the message explaining that error may create a security risk that wasn't apparent to a programmer concentrating on code alone.

Error messages need to negotiate narrow straits. They need to be meaningful to users and give support staff the diagnostic information needed to correct errors. At the same time, they should not give too much information to a hacker. For example, when a login attempt fails, an error message such as "User Name Correct, Password Incorrect" is to a hacker what a Milk Bone is to a dog: it confirms a valid username, so the attacker only needs to focus on cracking the password. Worse, a confirmed username or email address is all an attacker needs to try credentials stolen from other breaches against the account.
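The difference is easy to see in code. Below is a minimal sketch in Python; the credential store and function names are illustrative, not drawn from any particular framework:

```python
# Toy credential store for demonstration only.
USERS = {"alice": "s3cret"}

def login_leaky(username, password):
    # BAD: distinguishes "unknown user" from "wrong password",
    # confirming valid usernames to an attacker.
    if username not in USERS:
        return "User name incorrect"
    if USERS[username] != password:
        return "User name correct, password incorrect"
    return "Welcome"

def login_safe(username, password):
    # BETTER: one generic message for every failure. Detailed
    # diagnostics belong in server-side logs for support staff,
    # not in the message shown to the user.
    if USERS.get(username) == password:
        return "Welcome"
    return "Invalid username or password"
```

With the safe variant, a failed attempt against a real account is indistinguishable from one against a nonexistent account, which is exactly the property a code review focused only on correctness can overlook.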

Open Source Headache

Third-party dependencies, and open source components in particular, continue to be a security headache. Securing native code alone isn't enough to protect applications, largely because developers don't completely control all the code their programs use. Up to 90 percent of an application can be made up of third-party components, so a developer could write rock-solid secure code for their apps and still know nothing about the security of those components. Many of them contain open source code with flaws. It's estimated that 50 percent of the world's largest companies use applications built on open source components with vulnerabilities.

Vulnerabilities in open source components can be a real problem for developers, particularly developers of web applications. "Component vulnerabilities can cause almost any type of risk imaginable, ranging from the trivial to sophisticated malware designed to target a specific organization," the Open Web Application Security Project noted on its website. "Components almost always run with the full privilege of the application, so flaws in any component can be serious."

OWASP also highlighted that development teams don't focus on keeping the components and libraries they use updated. "In many cases, the developers don't even know all the components they are using, never mind their versions," they added. "Component dependencies make things even worse."
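A first step toward fixing that blind spot is simply producing an inventory of what is installed. As a minimal sketch, Python's standard library can enumerate every third-party distribution in an environment along with its version, which can then be checked against a vulnerability feed:

```python
# Inventory the third-party components a Python environment contains,
# using only the standard library (Python 3.8+).
from importlib.metadata import distributions

def component_inventory():
    # Returns {distribution name: version} for every installed package.
    return {dist.metadata["Name"]: dist.version for dist in distributions()}

if __name__ == "__main__":
    for name, version in sorted(component_inventory().items()):
        print(f"{name}=={version}")
```

Similar inventory tooling exists for other ecosystems; the point is that "what components are we running, at what versions?" should be answerable by a script, not by memory.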

Some commercial software makers keep their customers apprised of recently discovered flaws and push fixes to them. In contrast, most organizations don't have a reliable way of being notified of zero-day vulnerabilities in open source components, or of patches when they become available.

Middleware and Config Vulnerabilities

Not only do apps work with vulnerable components, but they may also be called on to work with middleware. Middleware is useful because it mediates network services to applications through devices like web and application servers. However, middleware can create its own security problems, including problems that won't be apparent from a code review of the application alone.

For example, authentication and access-privilege problems between an application and its middleware won't be discovered until the application runs and interacts with that middleware. The same is true for other potential security vulnerabilities, which could allow information in a workflow to be intercepted or viewed, or could compromise the integrity of transactions on the network.
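This is why such flaws are caught by run-time tests rather than code review. The sketch below simulates the idea with stand-in functions (the middleware, handler, and token value are all hypothetical): an integration test exercises the running stack and asserts that the middleware layer actually rejects unauthenticated requests.

```python
# Hedged sketch: the middleware and handler here are stand-ins for a
# real web/application-server stack, used to show the shape of a
# run-time authentication check.
def auth_middleware(handler):
    # Wraps a handler and rejects requests lacking a valid token.
    def wrapped(request):
        if request.get("token") != "valid-token":
            return {"status": 401, "body": "Unauthorized"}
        return handler(request)
    return wrapped

def account_handler(request):
    # Sensitive endpoint that must never be reachable anonymously.
    return {"status": 200, "body": "account data"}

app = auth_middleware(account_handler)

# The run-time assertions an integration test would make:
assert app({})["status"] == 401                        # no credentials
assert app({"token": "valid-token"})["status"] == 200  # valid credentials
```

If the middleware were misconfigured or the handler registered outside the wrapped chain, the first assertion would fail at run time even though each piece of code looks fine in isolation.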

Errors in configuration files can be another fertile area for vulnerabilities that securing code alone won't expose. A programmer can be very meticulous about security, yet if their application is misconfigured, it can be as vulnerable as a sloppily coded one. Moreover, the problem can be exacerbated by many configuration settings defaulting to values that introduce vulnerabilities into the application or the middleware it's using.
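One defense is to audit a deployed configuration against a hardened baseline. Here is an illustrative sketch; the setting names and "safe" values are assumptions made up for the example, not taken from any specific product:

```python
# Hardened baseline: settings whose unsafe values are common insecure
# defaults. Names and values are illustrative assumptions.
RISKY_DEFAULTS = {
    "debug": False,           # debug pages often leak stack traces
    "allow_anonymous": False, # anonymous access should be opt-in
    "verify_tls": True,       # never ship with certificate checks off
}

def audit_config(config):
    # Returns the settings whose values differ from the hardened baseline.
    findings = []
    for key, safe_value in RISKY_DEFAULTS.items():
        if config.get(key, safe_value) != safe_value:
            findings.append(key)
    return findings

print(audit_config({"debug": True, "verify_tls": False}))
# → ['debug', 'verify_tls']
```

Run against every environment, a check like this catches the dangerous default that a careful reading of the source code never would.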

Making matters worse, web application config files can be changed at any time, even after an application is in production. A well-meaning administrator could open an application to attack simply by tweaking a config file.
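Post-deployment tampering of this kind can be detected by recording a cryptographic digest of each config file at release time and comparing against it periodically. A minimal sketch, with the paths and workflow assumed for illustration:

```python
# Detect post-deployment config changes by comparing a file's current
# SHA-256 digest against a baseline captured at deploy time.
import hashlib

def file_digest(path):
    # Hash the file's raw bytes.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def config_changed(path, baseline_digest):
    # True if the file no longer matches the deploy-time digest.
    return file_digest(path) != baseline_digest
```

A monitoring job that alerts on `config_changed(...)` turns a silent configuration drift into a reviewable event, whether the change came from an attacker or a well-meaning administrator.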

Any organization concerned with protecting its information assets needs its development teams to write secure code. But it can’t stop there. It has to continually test apps as they’re running, too.