Simplifying performance monitoring tools

There's a new way to monitor information systems that promises to help agencies make the systems work faster and more reliably. If done right, it'll probably save them money, too.

Using one of a new breed of application monitoring tools, information technology managers can focus on the performance of whole applications rather than on components such as routers or servers.

This is important because at the end of the day, what usually matters most is how well applications perform, not whether a particular router or switch is working at 50 or 60 percent capacity.
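The distinction can be made concrete with a simple probe. Rather than polling a router's interface counters, an application-level check measures what the user actually experiences: did the request succeed, and how long did it take end to end? The sketch below is illustrative only; the URL is a hypothetical placeholder, not an endpoint or API from any product named in this article.

```python
import time
import urllib.request

# Hypothetical endpoint; in practice this would be the agency's own
# application, such as a library catalog search page.
APP_URL = "http://example.com/"

def probe(url, timeout=10):
    """Measure availability and end-to-end response time for one request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()  # pull the full body, as a real user would
            ok = 200 <= resp.status < 300
    except OSError:  # covers URLError, HTTPError, and socket timeouts
        ok = False
    elapsed = time.monotonic() - start
    return ok, elapsed

if __name__ == "__main__":
    ok, seconds = probe(APP_URL)
    print(f"available={ok} response_time={seconds:.3f}s")
```

Run on a schedule and logged over time, even a crude probe like this answers the application-level question ("is the site up, and is it fast?") that component utilization figures cannot.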

Although application performance monitoring is gaining adherents, many hurdles have to be overcome before it is widely adopted. Most IT departments are not organized to take advantage of the new tools (see sidebar). Moreover, the tools do not yet offer as much functionality as users desire, and the cost remains high.

Despite those issues, demand is growing. Dataquest, a San Jose, Calif., market research firm, expects revenue from that market segment to grow from $2.42 billion in 1999 to $4.74 billion in 2003.

Companies selling such products include BMC Software Inc., Candle Software Inc., Compuware Corp., Concord Communications Inc., Hewlett-Packard Co., IBM Corp., Lucent Technologies, MediaHouse Software Inc., NetScout Systems Inc. and Network Associates Inc.

Early Adopters

There are several reasons why IT managers are more interested in the performance of applications than in specific system components. For one, network technology has matured considerably in the past few years. Increased reliability means that networks don't have to be monitored as closely as they once were. Also, although networks are better, they're more complex. That makes monitoring their many components — such as local-area networks, modems, wide-area network links, remote access concentrators, backbone switches and more — a tricky puzzle to piece together.

The increasing importance placed on applications is evident at the world's largest library, the Library of Congress in Washington, D.C. It holds more than 15 million items on approximately 530 miles of bookshelves and adds about 10,000 items to its collection daily. Visitors usually browse through this collection by visiting the library's World Wide Web site, but there have been times, such as when Independent Counsel Kenneth Starr's report was issued, that high traffic volumes caused the site to fail. To ensure that the site would always be available, the agency searched for a performance monitoring tool for several IBM RS/6000 servers running Oracle Corp. database management software. After examining products from BMC Software, Candle and Compuware, the agency chose Compuware's EcoTools. "We wanted to customize the monitoring probes running on our devices and felt it would be easiest with EcoTools," explained Charles Shellum, computer specialist for the Library of Congress.

Still, many organizations have only begun to look at application monitoring issues. After deploying a new Asynchronous Transfer Mode network in the summer of 1997, the Hubble Space Telescope Program at the Goddard Space Flight Center in Greenbelt, Md., focused on better network monitoring tools. "We wanted to go beyond knowing that the network operated at 75 percent utilization to determining where bottlenecks may be arising," said Daniel Carrick, a network design engineer at the agency. The Hubble program now uses Concord's Network Health to deliver that information, and the tool enabled the agency to reconfigure its systems to improve the performance of its mail servers.

Recently, the agency began deploying BMC Software's Patrol package for application-level monitoring. "When a user calls the help desk with a problem, we want our technicians to understand what may be going on with his applications," Carrick said.

Not all agencies have had positive experiences with application-level monitoring. The Tennessee Valley Authority in Chattanooga, Tenn., deemed application-level monitoring tools deficient after examining them. "We found the products could not break down information into as much detail as we wanted," said Darl Richey, a network management engineer at the organization. "They could tell us how heavily our Oracle DBMS was being used but not which applications were creating the load."

Another potential stumbling block for some agencies is cost. To gather comprehensive performance data, agencies must place performance monitoring software on the different devices. Software on thousands of clients, hundreds of routers and tens of servers can quickly result in an investment of $50,000 or more.

High costs are one reason why the National Institutes of Health in Silver Spring, Md., has limited its monitoring tools to Enterprise Monitor from MediaHouse Software. "The agency really doesn't have the budget or the staff to invest heavily in performance monitoring tools," said David Chestnut, a network manager at the agency's information management services department.

Because of those hurdles, application monitoring is expected to evolve slowly. "The products available now are better than those in the past, but vendors are talking about complete application-level performance monitoring products," said Stephan Elliot, an industry analyst at Dataquest's office in Lowell, Mass. "Complete solutions are another two to five years away from being delivered."

