Securing the Software Development Environment

August 27, 2012 by Carl J. Mueller

In the February 2012 edition of Computer, a sidebar to an article on "Web Application Vulnerabilities" asks: "Why don't developers use secure coding practices?" [1] The sidebar offers the usual clichés, that programmers feel constrained by security practices, and it suggests that additional education will correct the situation. Another frequently proposed cure-all is to introduce a secure development process. However, adopting either improved security education or a new secure development process requires a plan for moving from the current development process to one that is more secure. Instead of looking for a single solution, another approach is to identify the threat agents, threats, vulnerabilities, and exposures, and then to establish a cost-effective security policy that provides the appropriate safeguards.

Many view programmers as the primary threat agent in a development environment; however, Microsoft reports that more than 50% of the security defects reported were introduced in the design of a component [2]. Since designers as well as programmers introduce vulnerabilities into an application, it is appropriate to identify all of the software development roles (analysts, designers, programmers, testers) as potential threat agents. Viewing software developers as threat agents does not imply that the individuals filling these roles are careless or criminal, but rather that they have the greatest opportunity to introduce source code that compromises the confidentiality, integrity, or availability of a computer system.

Software developers can expose assets accidentally by introducing a defect. Defects have many causes, such as oversight or lack of experience with a programming language, and are a normal part of the development process. Quality Assurance (QA) practices, such as inspections and unit testing, focus on eliminating defects from the delivered software. Developers can also expose assets intentionally by introducing malicious functionality [3], which can take many forms, such as worms, Trojans, and salami fraud [4]. A salami fraud is an attack in which the perpetrators take a small amount of an asset at a time, such as the "collect the round-off" scam [5]. An individual interested in introducing illicit functionality will exploit any available vulnerability. Identifying all of the potential exposures and creating safeguards is a significant challenge for the security analyst, but by analyzing the development process it is possible to identify a number of cost-effective safeguards.
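To make the "collect the round-off" idea concrete, the toy sketch below shows how fractions of a cent lost to truncation could be silently redirected. It is a simplified illustration only; the account names, interest rate, and skim account are invented for the example.

```python
from decimal import Decimal, ROUND_DOWN

def post_interest(balances, rate, skim_account="hidden"):
    """Toy illustration of a 'collect the round-off' salami scheme.

    Interest is truncated to whole cents before it is credited, and the
    discarded fraction of a cent is quietly accumulated in a separate
    account instead of being returned to the customer.
    """
    skimmed = Decimal("0")
    for account, balance in balances.items():
        exact = balance * rate                                     # exact interest owed
        credited = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
        balances[account] = balance + credited                     # what the customer sees
        skimmed += exact - credited                                 # what the attacker keeps
    balances[skim_account] = balances.get(skim_account, Decimal("0")) + skimmed
    return balances

# A fraction of a cent per account is invisible on any one statement,
# but it accumulates across millions of accounts and interest runs.
accounts = {"alice": Decimal("1234.56"), "bob": Decimal("7890.12")}
print(post_interest(accounts, Decimal("0.0375")))
```

A defect-focused QA process can miss this kind of code entirely, because it produces correct-looking results for every individual account.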

To address these exposures, many researchers recommend enhancing an organization's QA program. One frequent recommendation is to expand the inspection practice by introducing a checklist for the various exposures presented by the programming languages the developers use [2]. Items added to a security inspection checklist typically include Basic's Peek() and Poke() functions, C's string copy functions, exception handling routines, and programs executing at a privileged level [2]. Functions like Peek() and Poke() make it easier for a programmer to access memory outside of the program, but a character array or table without bounds checking produces similar results. A limitation of language-specific inspection checklists is that each language used to develop the application needs its own checklist. For some web applications, this could require three or more inspection checklists, and even then the checklists may not provide safeguards for all of the vulnerabilities. Static analyzers, such as those catalogued by the SAMATE project sponsored by the National Institute of Standards and Technology (NIST), are one approach to automating some of the objectives of an inspection checklist, but static analyzers have a reputation for flagging source statements that are not actually problems [6].
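Part of such a checklist can be automated with even a very simple scan. The sketch below, in Python, flags calls to a handful of classically unsafe C functions; the banned-function list and the .c file extension are illustrative assumptions, and, as with any pattern-based check, the findings still need human review because not every flagged statement is a real problem.

```python
import re
import sys
from pathlib import Path

# Illustrative checklist of C functions an inspection team might flag;
# a real checklist would be tailored to the languages and APIs in use.
BANNED = ("strcpy", "strcat", "sprintf", "gets", "scanf")
PATTERN = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

def scan(root: Path) -> int:
    """Report every use of a banned function under 'root' and return the count."""
    findings = 0
    for path in root.rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            for match in PATTERN.finditer(line):
                findings += 1
                print(f"{path}:{lineno}: banned function '{match.group(1)}'")
    return findings

if __name__ == "__main__":
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    sys.exit(1 if scan(target) else 0)
```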

Using a rigorous inspection process as a safeguard will identify many defects, but it will not adequately protect against exposures due to malicious functionality. An inspection that occurs before the source code is placed under configuration control leaves a substantial exposure: the developer can simply add the malicious functionality after the source code passes inspection, or give the inspection team a listing that does not contain the malicious functionality. Figure 1 illustrates a traditional unit-level development process containing this vulnerability.

As illustrated in Figure 1, a developer receives a change authorization to begin the modification or implementation of a software unit. Generally, the "authorization" is verbal, and the only record of it appears on the developer's progress report or the supervisor's project plan. To assure that another developer does not update the same source component, the developer "reserves" the necessary source modules. Next, the developer modifies the source code to provide the necessary features. When all of the changes are complete, the developer informs the supervisor, who assembles a review panel of three to five senior developers and/or designers. The panel examines the source code to evaluate its logic and documentation. The review committee can recommend that the developer make major changes that will require another review, minor changes that do not require a full review, or no changes and no further review. It is at this point in the development process that the source code is most vulnerable to the introduction of malicious functionality, because there are no reviews or checks before the software is checked in.

Another limitation of inspections is that few of the emerging Agile methodologies recommend formal inspections. Development methodologies such as eXtreme Programming utilize pair programming and test-first design in lieu of inspections, and Scrum focuses on unit testing for defect identification [7, 8]. Using inspections as the primary safeguard against development exposures limits the cost savings promised by these newer development methodologies and does not provide complete protection from a developer wishing to introduce malicious software.

Programming languages and the development process offer a number of opportunities to expose assets, but many development tools, such as debuggers and integrated development environments, can also expose an asset to unauthorized access. Many development tools operate at the same protection level as the operating system kernel and would function quite nicely as a worm to deposit a rootkit or other malicious software. Another potential exposure, unrelated to programming languages, is the use of "production" data for testing. Using "production" data may give developers access to information they do not have a need to know. Only a comprehensive security policy focusing on personnel, operations, and configuration management can provide the safeguards necessary to secure an organization's assets.
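When testing against anything derived from production data is unavoidable, one common safeguard is to mask or pseudonymize the sensitive fields before the data ever reaches a development system. The sketch below assumes a simple CSV export with hypothetical column names (ssn, name, email); the real columns, the salt handling, and the masking rules would depend on the actual data and on applicable regulations.

```python
import csv
import hashlib

# Hypothetical sensitive columns in a production export; adjust to the real schema.
SENSITIVE = {"ssn", "name", "email"}

def pseudonymize(value: str, salt: str = "per-project-secret") -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:12]

def mask_export(src_path: str, dst_path: str) -> None:
    """Copy a CSV export, replacing sensitive fields with pseudonyms."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for column in SENSITIVE & set(row):
                row[column] = pseudonymize(row[column])
            writer.writerow(row)

# Example: mask_export("customers_prod.csv", "customers_dev.csv")
```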

Many organizations conduct background checks, credit checks, and drug tests when hiring new employees as part of their security policy. Security clearances issued by governmental agencies have specific terms; non-governmental organizations should also re-screen development personnel periodically. Some would argue that things like random drug tests and periodic security screenings are intrusive, and they are. However, developers need to understand that just as organizations use locks on doors to protect their physical property, they need to conduct periodic security screenings to protect intellectual property and financial assets from those that have the greatest access.

Another element of a robust development security policy is to maintain separate development and production systems. Developing software in the production environment exposes organizational assets to a number of threats, from debugging tools to a program written simply to gain unauthorized access to information stored on the system. Recent publicity on the Stuxnet worm suggests that a robust development security policy should also prohibit the use of external media, such as CDs, DVDs, and USB devices [9]. Another important point about the Stuxnet worm is that it targeted a development tool, and that tool introduced the malicious functionality.

Configuration management is the traditional technique for controlling the content of deliverable components and is an essential element of a robust security policy [10]. Of the six areas of configuration management, the two with the greatest effect on security are configuration control and configuration audits. Version control tools, such as ClearCase and CVS, provide many of the features required for configuration control. A configuration audit is an inspection that occurs after all work on a configuration item is complete; it assures that all of the physical elements and process artifacts of the configuration item are in order.

Version control tools prevent two or more programmers from overwriting each other's changes. However, most version control systems permit anyone with authorized access to check source code "in" and "out" without an authorized change request, and some do not even track the last access to a source module. In a secure environment, the version control system must integrate with the defect tracking system and must record the identity of every developer who accesses a specific source module. Integrating the version control system with the defect tracking system permits only the developer assigned to make a specified change to access the related source code. Tracking read access matters as well: developers frequently copy source code from a tested component, or investigate the approach another developer used for a specific issue, and so need read access to source modules they are not maintaining. That same access, however, is also a convenient way to research how to introduce malicious functionality into another source module. By logging source module access, security personnel can monitor access to the source code.
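With an ordinary version control system, part of this integration can be approximated by a server-side hook. The sketch below is a minimal, assumption-laden example: it supposes a hook that receives the committer and the commit message, a hypothetical CR-#### change-request ID format, and a placeholder lookup_change_request() call into the defect tracking system, none of which are prescribed here.

```python
import logging
import re
import sys

logging.basicConfig(filename="scm_access.log", level=logging.INFO)

TICKET = re.compile(r"\bCR-\d+\b")   # hypothetical change-request ID format

def lookup_change_request(ticket_id: str) -> dict:
    """Placeholder for a query against the defect tracking system.

    A real implementation would return at least the request's status and
    the developer assigned to it; here it is stubbed out for illustration.
    """
    raise NotImplementedError("integrate with your defect tracker here")

def authorize_checkin(commit_message: str, committer: str) -> bool:
    """Accept a check-in only if it cites an open change request assigned to the committer."""
    match = TICKET.search(commit_message)
    if not match:
        print("rejected: no change request referenced", file=sys.stderr)
        return False
    request = lookup_change_request(match.group(0))
    if request.get("status") != "open" or request.get("assignee") != committer:
        print("rejected: change request not open or not assigned to this developer", file=sys.stderr)
        return False
    logging.info("check-in by %s under %s", committer, match.group(0))
    return True
```

A hook of this kind only covers check-ins; logging read access to source modules generally has to be enforced and recorded on the version control server itself.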

Configuration audits are the second configuration management technique that makes a development organization more secure. Audits range in formality from a clerk using a checklist to verify that all of the artifacts required for a configuration item have been submitted, to a multi-person team assuring that the delivered software reproduces the submitted artifacts and that the tests adequately address the risks posed by the configuration item [11]. Some regulatory agencies require audits for safety-critical or high-reliability applications to provide an independent review of the delivered product. An audit in a high-security environment addresses the need to assure that the delivered software does not expose organizational assets to risk from either defects or malicious functionality. Artifacts submitted with a configuration item can include, but are not limited to, the requirements or change requests implemented, the design specification, test scripts, test data, test results, and the source code for the configuration item. To increase confidence that the delivered software does not contain defects or malicious functionality, auditors should assure that the test cases provide 100% coverage of the delivered source code. This is particularly important with interpreted languages, such as Python and other scripting languages, because a defect can permit the entry of malicious code by a remote user of the software. Another approach auditors can use to assure coverage is to re-test the configuration item with the same test data and confirm that the results of the re-test match those produced in the verification and validation procedure.
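Parts of such an audit can be scripted. The sketch below assumes a Python configuration item whose tests run under pytest with the coverage.py tool available, and it assumes the developer checked in a JUnit-style test_results.xml with the source; those file names and tools are illustrative choices, not requirements. The audit fails if the re-run tests do not pass, if statement coverage is below 100%, or if the re-test outcome differs from the submitted results.

```python
import subprocess
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

def result_summary(path: Path) -> tuple:
    """Return (tests, failures, errors, skipped) from a JUnit-style results file."""
    root = ET.parse(path).getroot()
    suite = root[0] if root.tag == "testsuites" else root
    return tuple(suite.get(key, "0") for key in ("tests", "failures", "errors", "skipped"))

def audit(item_dir: Path) -> bool:
    # 1. Re-run the submitted test suite under coverage measurement.
    rerun = subprocess.run(
        ["coverage", "run", "-m", "pytest", "--junitxml=retest_results.xml"], cwd=item_dir
    )
    if rerun.returncode != 0:
        print("audit failed: re-test did not pass")
        return False

    # 2. Require 100% statement coverage of the delivered source.
    report = subprocess.run(["coverage", "report", "--fail-under=100"], cwd=item_dir)
    if report.returncode != 0:
        print("audit failed: statement coverage below 100%")
        return False

    # 3. Compare the re-test outcome with the results checked in by the developer.
    if result_summary(item_dir / "test_results.xml") != result_summary(item_dir / "retest_results.xml"):
        print("audit failed: re-test results do not match the submitted results")
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if audit(Path(sys.argv[1])) else 1)
```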

Adopting these recommendations for a stronger configuration management process transforms the typical unit-level development process illustrated in Figure 1 into the more secure process illustrated in Figure 2. In the more secure process, a formal change authorization is generated by a defect tracking system or by the version control system's secure change authorization function. Next, the assigned developer makes the changes required by the change authorization. After implementing and testing the changes, the developer checks all of the artifacts (source code, test drivers, and results) into the version control system. Checking in the artifacts automatically triggers a configuration audit of the development artifacts. Auditors may accept the developer's changes or create a new work order for additional changes. Unlike the review panel, the auditors may re-test the software to assure adequate coverage and to confirm that the test results match those checked in with the source code. Making this change to the development process significantly reduces the exposure to accidental defects and malicious functionality, because it verifies the source code deployed in the final product together with all of its supporting documentation.

Following all of these recommendations will not guarantee the security of the software development environment, because new vulnerabilities and social engineering attacks will always appear. However, using recurring security checks, separating developers from production systems and data, controlling media, and using rigorous configuration management practices should make penetration of your information security perimeter more difficult. It is also necessary to conduct periodic reviews of development tools and configuration management practices, because threat agents will adapt to any safeguard that does not adapt to new technology.

References:

[1]    N. Antunes and M. Vieira, "Defending against Web Application Vulnerabilities," Computer, vol. 45, pp. 66-72, 2012.

[2]    N. Davis, W. Humphrey, S. T. Redwine Jr., G. Zibulski, and G. McGraw, "Processes for Producing Secure Software: Summary of US National Cybersecurity Summit Subgroup Report," IEEE Security and Privacy, vol. 2, pp. 18-25, 2004.

[3]    G. McGraw and G. Morrisett, "Attacking Malicious Code: A Report to the Infosec Research Council," IEEE Softw., vol. 17, pp. 33-41, Sept.-Oct. 2000.

[4]    M. E. Kabay, "A Brief History of Computer Crime: An Introduction for Students," 2008.

[5]    M. E. Kabay, "Salami fraud," Network World Security Newsletter, 2002. Available: http://www.networkworld.com/newsletters/sec/2002/01467137.html

[6]    NIST. (2012, Apr. 14). SAMATE - Software Assurance Metrics And Tool Evaluation. Available: http://samate.nist.gov

[7]    K. Schwaber and M. Beedle, Agile Software Development with Scrum. Upper Saddle River, NJ: Prentice-Hall, Inc., 2002.

[8]    K. Beck and C. Andres, Extreme Programming Explained: Embrace Change (2nd Edition): Addison-Wesley Professional, 2004.

[9]    R. Langner, "Stuxnet: Dissecting a Cyberwarfare Weapon," IEEE Security & Privacy, pp. 49-51, 2011.

[10]    A. Leon, A guide to software configuration management: Artech House, Inc., 2000.

[11]    N. R. Nielsen, "Computers, security, and the audit function," presented at the Proceedings of the May 19-22, 1975, national computer conference and exposition, Anaheim, California, 1975.

Carl J. Mueller

Dr. Mueller is an assistant professor in the Department of Computer Information Systems at Texas A&M University-Central Texas in Killeen. He obtained his PhD in Computer Science from the Illinois Institute of Technology, where his research focused on automated software testing under the supervision of Dr. Bogdan Korel. Dr. Mueller has over 7 years of teaching experience and more than 35 years of industrial experience specializing in developing and testing safety-critical and/or high-reliability applications. Currently, Dr. Mueller is conducting research into software development security and authentication.