Microsoft Corporation, by any measure, is one of the largest generators of computer code on the planet. Even today, with the decline of desktop computing and the rise of LAMP (Linux, Apache, MySQL, Perl/PHP/Python)-based cloud computing, the company maintains a desktop market share approaching 90 percent and powers almost a third of the servers on the Internet, along with many more in private-sector networks.
Since the late nineties, this has given hackers an enormous attack surface to exploit by focusing on Microsoft software. And in the early 2000s, the company paid a significant price for that. Worms like Code Red, Nimda, and MyDoom, all exploiting Microsoft software, rocked the Internet and cost individuals and companies millions of dollars in recovery costs.
The Software Development Lifecycle Gives Way to the Security Development Lifecycle
In February 2002, reacting to these threats, Microsoft shut down its entire Windows division. Routine coding and planning stopped. Every single developer in the division was re-tasked with one goal: find and fix security bugs in the code base.
Many vulnerabilities were patched. But the company also realized that such retroactive, disruptive initiatives were unsustainable. Something dramatic, something deeply integrated into the fabric of the development process, had to change to prevent future security epidemics.
Like most large developers at the time, Microsoft used a variant of what it called the Software Development Lifecycle to govern its product development efforts. The software development lifecycle describes the systematic process of building complex systems through a series of phases, ranging from requirements gathering to system shutdown and disposal.
In late 2003, the company unveiled something it called, instead, the “Security Development Lifecycle.” The two lifecycles were integrated so that cybersecurity considerations became part of the coding process itself, rather than a set of expensive afterthoughts. And, more than simply implementing the concept internally, Microsoft took the new SDL on the road, evangelizing it to partners and other developers in its ecosystem.
Today, although Microsoft operating systems remain popular, they no longer break the top three in terms of security vulnerabilities detected in the wild. The integration of cybersecurity professionals into the SDL has been successful, providing a model for other developers to follow.
Time and Money Continue to Impact Security Considerations in the SDL
Software security holes are bugs: unintended aspects of program execution that can be exploited to break the expected function of the program and allow actions the user would not approve or otherwise allow.
It’s a dirty little secret in information security circles that many of today’s major compromises are a result of nothing more than greed. Not greed on the part of the hackers, though that obviously plays a role; greed on the part of the developers who slapped together a pile of barely working code that was riddled with bugs and security holes that the hackers were able to easily exploit.
Security is a distant consideration for many individual programmers, who are understandably focused on building the core functions they envision offering joy and utility to their customers. This paradigm has existed as long as the Internet has: programmers in academia coded to solve problems in a friendly environment, where door locks were unnecessary and malicious or intentionally destructive behavior was almost unheard of. Since additional effort in any phase of the SDL is costly, companies are reluctant to invest in cybersecurity initiatives until forced to do so.
Failure to devote time and effort to security considerations in the development process in the service of higher profit margins is likely to continue as long as companies can get away with it, but cybersecurity professionals are pushing back against the status quo.
Leveraging the SDL for Improved Security
Though this pattern prevails, it is not inevitable, and strong counter-examples exist, many of them leveraging the Software Development Lifecycle to their security advantage.
At NASA in the 1970s, when the Space Shuttle was in development, programmers faced a sobering proposition: tasked with writing the control code for a flying bomb that would carry seven souls into the harsh realm of outer space, they knew a bug could be lethal.
The team responded by creating possibly the strictest, most secure software development scheme in history. By 2011, the last three versions of the deployed software, each containing nearly half a million lines of code, had no more than a single error apiece, never crashed, and never miscalculated.
NASA’s team had a $35 million budget to accomplish their goal, but such security does not require huge amounts of money. OpenBSD, a free variant of the Unix system software that largely undergirds today’s Internet, is produced entirely by volunteer developers with a strict focus on security. In the nearly 20 years the operating system has been available, only two remotely exploitable security holes have ever been found in the default install.
The team accomplishes this through a thoroughly open code base and constant cycles of auditing and code review, a key phase of most SDL models that other developers often overlook.
Cybersecurity Professionals Have a Role to Play in the SDL
Although most cybersecurity workers are not expert programmers, certain niche information security careers require deep coding knowledge and experience. White hat hackers pore over code, looking for vulnerabilities. Applications security specialists work with software development engineers to produce more secure code. Even ordinary security engineers and analysts often use some basic programming skills to test software they are tasked with analyzing or deploying. Their considerations include:
- Security requirements of the application use case
- The user base expected for the application
- The underlying development technologies and languages being used
- Data access that will flow through the application
- The platform on which the application will be deployed
There are variations in how different organizations approach the SDL. For some, a waterfall development model puts cybersecurity considerations into play during the design and testing phases, with limited opportunities to impact the software after deployment. In others, the adoption of agile development methodologies involves security considerations at almost every step in the rapidly iterative coding cycle, with quick detection and correction of vulnerabilities emphasized.