[et_pb_section admin_label=”section”][et_pb_row admin_label=”row”][et_pb_column type=”4_4″][et_pb_text admin_label=”Text” background_layout=”light” text_orientation=”left” use_border_color=”off” border_color=”#ffffff” border_style=”solid”]
In recent months, momentum has been building for developers to write more secure application code. While writing secure code is vital to an organization’s security, it’s not the final word in creating applications resistant to attack.
A number of potential run-time flaws can be identified and corrected while source code is being written. Others, however, only surface when the application is running or under attack. Such errors may stem from the code paths taken given the current data and state of the application, how memory is used, how an application behaves over time or even something as simple as how a program displays error messages.
When a developer reviews how their application handles errors, they may see nothing insecure about the code, and indeed, there may be nothing insecure about it. However, when the application runs and an error is produced, the message explaining the error may create a security risk that wasn’t apparent to a programmer concentrating on code alone.
Error messages need to negotiate narrow straits. They have to be meaningful to a user and give support staff the diagnostic information needed to correct the error, without giving too much away to a hacker. For example, when a user makes a mistake logging into a system, an error message such as “User Name Correct, Password Incorrect” is to a hacker what a Milk-Bone is to a dog. Instead of wondering whether the credentials they’re trying are any good at all, the attacker knows they have a valid username and can focus on cracking the password. Worse still, the attacker can reuse that confirmed username or email address to attack new targets, since the username is the only piece of information needed.
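A minimal sketch of the fix: return one generic message no matter which credential failed, so the response reveals nothing about whether the username exists. The user store and salting scheme here are hypothetical stand-ins for illustration, not a production authentication design.

```python
import hashlib
import hmac

# Hypothetical user store: username -> salted SHA-256 hash of the password.
_USERS = {
    "alice": hashlib.sha256(b"salt" + b"s3cret").hexdigest(),
}

# Same text for both failure modes: unknown user and wrong password.
GENERIC_ERROR = "Invalid username or password."

def login(username: str, password: str) -> str:
    stored = _USERS.get(username)
    supplied = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    if stored is None:
        # Do a dummy constant-time comparison anyway, so response timing
        # doesn't hint at whether the username was valid.
        hmac.compare_digest(supplied, supplied)
        return GENERIC_ERROR
    if not hmac.compare_digest(stored, supplied):
        return GENERIC_ERROR
    return "Welcome!"
```

The constant-time comparison matters for the same reason the message text does: an attacker who can’t distinguish the two failure modes, by wording or by timing, learns nothing about which half of the credential pair was right.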
Third-party dependencies, and open-source components in particular, continue to be a security headache. Securing native code alone isn’t enough to protect applications, because developers don’t completely control all the code their programs use. Up to 90 percent of an application can be made up of third-party components. A developer can write rock-solid secure code for their app, but they still don’t know how secure those third-party components are. Many of those components contain open-source code with flaws. It’s estimated that 50 percent of the world’s largest companies use applications built on open-source components with vulnerabilities.
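The core of a dependency audit can be sketched as matching pinned versions against an advisory database. The advisory table and package names below are made up for illustration; a real project should rely on a maintained scanner (for example pip-audit or OWASP Dependency-Check) rather than a hand-rolled table like this.

```python
# Hypothetical advisory table: (package, version) -> advisory ID.
# All entries are invented for this sketch.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-2024-0001",
}

def audit(requirements: list[str]) -> list[str]:
    """Return findings for pinned 'name==version' lines that match an advisory."""
    findings = []
    for line in requirements:
        name, _, version = line.partition("==")
        name, version = name.strip(), version.strip()
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings
```

Even this toy version shows why OWASP’s point about unknown dependencies bites: the audit can only flag components you know you are using, which is exactly the inventory many teams lack.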
Vulnerabilities in open source components can be a real problem for developers, especially developers of web applications. “Component vulnerabilities can cause almost any type of risk imaginable, ranging from the trivial to sophisticated malware designed to target a specific organization,” the Open Web Application Security Project noted at its website. “Components almost always run with the full privilege of the application, so flaws in any component can be serious.”
Making matters worse, OWASP continued, development teams don’t focus on keeping the components and libraries they use up to date. “In many cases, the developers don’t even know all the components they are using, never mind their versions,” it added. “Component dependencies make things even worse.”
What’s more, unlike commercial software makers who keep their customers apprised of recently discovered flaws and push fixes to them, most organizations don’t have a reliable way of being notified of zero-day vulnerabilities in, or available patches for, open-source components.
Not only do apps work with vulnerable components, but they may also be called on to work with middleware. Middleware is useful because it mediates network services to applications through devices like web and application servers. However, middleware can create its own security problems, problems that won’t be apparent from a code review of the application alone.
For example, if an application has authentication or access-privilege problems in its interaction with middleware, those won’t be discovered until the application runs and talks to the middleware. The same is true for vulnerabilities that could allow information in a workflow to be intercepted or viewed, or that could undermine the integrity of transactions on the network.
Errors in configuration files can be another fertile area for vulnerabilities that won’t become apparent by securing code alone. A programmer can be very meticulous about security, but if their application is misconfigured it can be as vulnerable as a sloppily coded one. Moreover, the problem can be exacerbated by many configuration settings defaulting to values that introduce vulnerabilities into the application or the middleware it’s using.
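One way to catch insecure defaults is a startup check that compares live settings against a list of known-bad values and refuses to proceed quietly. The setting names and values below are hypothetical examples of the kind of defaults the paragraph describes.

```python
# Hypothetical known-insecure defaults for this sketch.
INSECURE_DEFAULTS = {
    "debug": True,              # verbose errors leak internals to attackers
    "allow_http": True,         # unencrypted transport
    "admin_password": "admin",  # vendor default credential
}

def check_config(config: dict) -> list[str]:
    """Return the settings still sitting at a known-insecure default."""
    return [
        key
        for key, bad_value in INSECURE_DEFAULTS.items()
        if config.get(key) == bad_value
    ]
```

A deployment script could call `check_config` at startup and abort (or at least log loudly) if the returned list is non-empty, turning a silent misconfiguration into a visible failure.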
Making matters worse, web application config files can be changed at any time — even after an application is in production. A well-meaning administrator could open an application to attack by diddling a config file.
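Post-deployment tampering of that sort can be detected with a simple integrity check: record a hash of the config file at deploy time, then periodically compare. This is a minimal sketch; a real setup would store the baseline somewhere the application itself cannot write.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline(path: Path) -> str:
    """Run once at deploy time; persist the result out of the app's reach."""
    return file_digest(path)

def config_changed(path: Path, baseline: str) -> bool:
    """Run periodically; True means the file was modified after deploy."""
    return file_digest(path) != baseline
```

This doesn’t stop a well-meaning administrator from editing the file, but it does make sure the change is noticed and can be reviewed instead of silently opening the application to attack.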
Any organization concerned with protecting its information assets needs its development teams to write secure code, but it can’t stop there. It has to test apps as they’re running, too.