Play it safe on the interactive Web

Testing new features, sanitizing XML and securing development can help reduce risks from Web 2.0.

The arrival of Web 2.0 tools on the government scene has created unprecedented forms of collaboration and communication, both inside agencies and between government and citizens. But that’s not the only change the tools have brought. Blogs, wikis, social-networking sites, Really Simple Syndication (RSS) feeds, mashups and other Web 2.0 tools also add complexity and introduce risk to traditional information technology environments.

Experts cite user-generated content, heavy use of Extensible Markup Language (XML) and the ability to quickly combine data from a range of sources as attributes of Web 2.0 that could pose security issues. From an organizational perspective, Web 2.0 also threatens to upset the hierarchy of how information flows through organizations.

However, security watchers differ on just how much risk this highly interactive iteration of Web technology creates. Some experts believe Web 2.0 harbors the same array of security vulnerabilities as the previous technology generation but also presents a few new twists.

“Because these applications and tools are much more functionally interactive and allow content to grow at an exponential rate, there is greater vulnerability to application-layer attacks,” said Deborah Snyder, information security officer at New York state’s Office of Temporary and Disability Assistance (OTDA).

Others, however, maintain that Web 2.0 technology might actually reduce risk exposure, at least when hosted and deployed within an enterprise.

“It’s a little bit of a red herring,” Art Fritzson, a vice president at Booz Allen Hamilton, said of the Web 2.0 security concern. For example, an enterprise social-networking site for employees keeps communication in-house and reduces the chances of an external, malware-bearing e-mail message infecting an organization, he said.

However, government and industry executives believe some aspects of Web 2.0 technology call for special handling. Here are some guidelines on how to stay on the safe side.

Rule 1: Isolate new ventures

Externally facing Web 2.0 sites and applications, in particular, call for an extra measure of security. Newly launched initiatives should be isolated from an organization’s core IT assets.

Virginia launched an online community (www.ideas.virginia.gov) in September that allows constituents to submit ideas for boosting government performance. The site operates separately from Virginia’s main state portal, which links to enterprise systems such as motor vehicle licensing. “In Ideas, there will never be a direct relationship to core service IT systems,” said Aneesh Chopra, Virginia’s secretary of technology. “It’s on its own server.”

Keeping such applications off the enterprise network lets the state strike a balance between innovation and security, Chopra said. In addition, state policy allows a three-month testing period in which officials can launch and evaluate new applications without subjecting them to the state’s security rules and regulations. That approach lets the state rapidly deploy Web 2.0 features to residents, Chopra said, although the systems must be brought into compliance after the three-month window expires.

Jason Reed, principal consultant at SystemExperts, said it makes sense to grow Web 2.0 capabilities in a segmented area before unleashing them into a production setting. “There should be a time of skepticism and isolation,” he said. He added that many organizations view Web 2.0 as just an extension of a Web server and immediately integrate RSS feeds, for example, into an existing production environment. Instead, Reed recommends that organizations take the time to test new Web 2.0 features and build security measures around them. Once those control points are in place, Web 2.0 elements can be made part of the mainstream application offerings, he added.

Rule 2: Keep an eye on XML and other new programming techniques

Web 2.0 relies heavily on XML. RSS feeds, for example, are XML-formatted files, and developers frequently use Asynchronous JavaScript and XML (AJAX) to build mashups and other interactive Web 2.0 applications. Such applications generate XML traffic as requests for data and responses flow between browser and server. “Those applications are serving up XML in a new way,” Reed said.

As a consequence, Web 2.0 technology is subject to threats such as XML poisoning, in which an attacker uses the nesting inherent in XML to create documents with thousands of data elements. Reed said such recursion in an XML document can cause denial-of-service problems.

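One practical countermeasure is to screen incoming XML before handing it to downstream code. The following minimal sketch assumes Python and its standard-library parser; the limits and the parse_feed_safely name are illustrative rather than drawn from any product mentioned in this article. It rejects documents whose nesting depth or element count looks abusive.

```python
# Sketch: reject untrusted XML whose nesting depth or element count exceeds
# sane limits before fully processing it. Limits and names are illustrative.
import io
import xml.etree.ElementTree as ET

MAX_DEPTH = 20          # deepest element nesting we will accept
MAX_ELEMENTS = 10_000   # total element cap for a single document

def parse_feed_safely(xml_bytes: bytes) -> ET.Element:
    """Parse untrusted XML, aborting on suspiciously deep or large documents."""
    depth = 0
    count = 0
    root = None
    for event, elem in ET.iterparse(io.BytesIO(xml_bytes), events=("start", "end")):
        if event == "start":
            depth += 1
            count += 1
            if root is None:
                root = elem
            if depth > MAX_DEPTH:
                raise ValueError(f"XML nesting deeper than {MAX_DEPTH} levels")
            if count > MAX_ELEMENTS:
                raise ValueError(f"more than {MAX_ELEMENTS} elements")
        else:  # "end" event closes an element
            depth -= 1
    return root

# A well-formed but abusively nested document is rejected early.
hostile = b"<a>" * 50 + b"x" + b"</a>" * 50
try:
    parse_feed_safely(hostile)
except ValueError as err:
    print("rejected:", err)
```

An XML gateway of the kind Reed mentions performs this sort of inspection in front of the application rather than inside it.
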
AJAX-based mashups and rich Internet applications present a related risk because they cull data from far-flung sources. “With a mashup, we really don’t see as much of what is going on under the covers,” Reed said. As Web 2.0 applications consume data from unknown sources, they might encounter malicious code lurking in a given Web application. If a user’s browser executes that code, the result is a cross-site scripting attack that could compromise data retained in the browser.

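A common line of defense is to treat everything pulled from outside sources as untrusted text and encode it before it reaches the page. The sketch below is a minimal example, assuming Python on the server side and an illustrative feed-item structure; it escapes feed content before rendering it as HTML.

```python
# Sketch: output encoding for content pulled from outside sources, one common
# defense against cross-site scripting. Item fields and names are illustrative.
from html import escape

def render_items(items: list[dict]) -> str:
    """Build an HTML list from untrusted feed items, escaping every field."""
    rows = []
    for item in items:
        title = escape(item.get("title", ""), quote=True)
        summary = escape(item.get("summary", ""), quote=True)
        rows.append(f"<li><strong>{title}</strong>: {summary}</li>")
    return "<ul>\n" + "\n".join(rows) + "\n</ul>"

# A hostile summary is rendered as inert text rather than executed by the browser.
feed = [{"title": "Road closures", "summary": "<script>alert('xss')</script>"}]
print(render_items(feed))
```

The same principle applies in the browser: data from a third-party feed should be inserted as text, not interpreted as markup or script.
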
As for addressing those vulnerabilities, the diversity of Web 2.0 components — from Java and JavaServer Pages to JavaScript and XML — puts security technologies to the test. “From a source code analysis standpoint, the most important factors are understanding all of the languages involved in the application,” Snyder said. Another necessity is understanding the behavior of the frameworks on which Web 2.0 applications are built. For AJAX frameworks, she cited Google Web Toolkit and Direct Web Remoting, and for Web services frameworks, Apache Axis.

Penetration-testing tools and runtime protections, such as application firewalls, face a particularly stiff challenge, Snyder said. Penetration-testing tools, which probe for vulnerabilities in applications, were built around HTTP as a single testing interface. Those tools now must confront the task of “exercising an unknown application, with an unknown interface, that speaks an unknown multitude of protocols,” Snyder said. As a consequence, the current generation of Web security testing tools “must do a huge amount of work just to figure out what the application is, much less how to test it effectively,” she added.

The picture is similar for runtime protections because firewalls and systems that detect and prevent intrusions were also built around a single protocol or a small group of protocols, Snyder said. Reed said the emerging category of XML appliances, sometimes called XML gateways, can help organizations protect their Web 2.0 efforts by looking for harmful elements in XML traffic.

Rule 3: Be careful whom you trust

Web 2.0 thrives on user-contributed content. But with input coming from a variety of sources, should government employees believe what they read? Dennis Hayes, chief technology officer for the Navy Marine Corps Intranet program at EDS, said Web 2.0 applications have the same data integrity and confidentiality concerns as other technologies, but they add a new wrinkle. “You have the additional problem that people don’t trust, at the outset, contributions made by people they don’t know,” Hayes said.

Users will have to address information trustworthiness and quality. Hayes said EDS has been studying how to incorporate Web 2.0 tools into the Navy’s enterprise portal. Potential pitfalls include inaccurate content or information that is purposely misleading.

The reliability of data is another consideration. An organization’s mashup, for example, might include an externally created application that provides map coordinates. Because the application was built outside the organization, its design characteristics are unknown, Hayes said. That lack of knowledge calls the fidelity of the map coordinates into question, he added.

Trustworthiness of information ranks as an important issue in the intelligence community, where Web 2.0 technology promises to shake up the traditional chain of command. Doug Chabot, principal solutions architect at QinetiQ North America’s Mission Solutions Group, said Web 2.0 technology has caused a shift from a command-and-control paradigm to a mesh paradigm, in which the flow of information might bypass central points of control. That flow benefits the intelligence mission by enabling analysts to rapidly publish raw information that can be shared among peers, Chabot said. But it also removes potential checks and balances, thereby affecting what he referred to as data pedigree.

However, agencies can establish checks within peer-driven social networks. Chabot said agencies now enhance social-networking technologies with capabilities that ensure an appropriate level of data review, verification and validation. Those technologies provide “a means to ensure a review by two or more parties” who consider such factors as source and method, factual accuracy, and legality, he added. (A bare-bones sketch of that kind of review gate appears at the end of this article.)

Rule 4: Embed security in Web 2.0 development

Security is often an afterthought when organizations deploy new technologies such as Web 2.0. “For the most part, security is not very well embedded,” said Chris Hoff, chief security architect at Unisys’ Systems and Technology division. “It’s almost always bolted on. Oftentimes, [organizations] rush to deploy technology because of its utility and then realize…it’s not secure.”

The rapid adoption of new technology means that an application might not be appropriately analyzed for the exposure and risk associated with deployment, Hoff said. Organizations should take the time to test a Web 2.0 technology before introducing it. Those who might provide input include application developers, enterprise architects, security staff, and compliance, audit and risk management personnel, Hoff said. The legal and human resources departments might also want to be involved.

Instead of retrofitting security into Web 2.0 applications, agencies can build security into systems from the beginning. OTDA implemented a Secure System Development Life Cycle road map to “help ensure information security is kept in focus” throughout development, Snyder said. The process is based on National Institute of Standards and Technology guidelines.

Snyder said the use of code reviews and application security scanning and testing tools — including Fortify Source Code Analyzer and IBM AppScan — became key features of the risk-analysis process. Overall, the process includes security planning, threat and risk assessment, and testing and vulnerability scanning activities. Those elements “will help the agency identify and mitigate the risks associated with Web 2.0 technologies” and other emerging technologies, Snyder said.

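As an illustration of the two-party review gate Chabot describes under Rule 3, here is a bare-bones Python sketch. The Contribution class, its fields and the review threshold are hypothetical and are not drawn from any agency system.

```python
# Sketch: a contribution is publishable only after at least two distinct
# reviewers, other than the author, have signed off. Names are hypothetical.
from dataclasses import dataclass, field

REQUIRED_REVIEWS = 2  # "two or more parties" before publication

@dataclass
class Contribution:
    author: str
    text: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        if reviewer == self.author:
            raise ValueError("authors cannot review their own contribution")
        self.approvals.add(reviewer)

    def publishable(self) -> bool:
        return len(self.approvals) >= REQUIRED_REVIEWS

post = Contribution(author="analyst1", text="Raw field report ...")
post.approve("analyst2")
print(post.publishable())   # False: only one reviewer so far
post.approve("analyst3")
print(post.publishable())   # True: two distinct reviewers have signed off
```

A production system would also record the reviewers’ judgments on source and method, factual accuracy, and legality, but the core idea is the same: nothing is published on one person’s say-so.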

FCW security series

This article is the second of a three-part series. Last week's articles gave readers a glimpse at how experts view the Bush administration’s Comprehensive National Cybersecurity Initiative and what its ramifications are for federal information technology.

Next week's article focuses on which cybersecurity systems, operations and compliance procedures agencies are outsourcing most successfully, along with tips for selecting providers and managing contractors’ performance.