Don't cut corners on e-gov hardware (part 2)
- By Rich Kellett
- Apr 11, 2001
1. Design in parallelism at all levels when building systems. Avoid linear designs.
The Web works because of parallelism. Some programs must reside on the
server side, such as database software, but most processing actually occurs
on the client side, at your own PC. Literally millions of PCs and servers
all simultaneously process information in parallel to create the rich
environment and phenomenon we call the Internet.
Generally, you should separate key functions to run on their own processors
(servers). More processors (servers) will result in a faster overall design.
Use separate front-end processors (or function-specific servers) to manage
your disk farms, your communications out to PCs, your modem pool and other
devices. Giving each device "pool" (modems, disks, etc.) its own front-end
processor adds parallelism to your design, which allows the whole design
to operate at higher speeds.
"Off-loading" critical management functions from the server that runs
the applications makes the applications appear to run much faster.
In addition, consider having some applications run on their own separate
servers. For instance, any intensive database activity should be run on
a server apart from those that support your office automation software.
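The off-loading idea can be sketched in miniature: two independent workloads, hosted on separate workers, finish in roughly the time of the slower one rather than the sum of both. This Python sketch stands the two dedicated servers in with threads; the workload names and 0.2-second delays are placeholders, not measurements.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Two independent workloads -- say, a database query and a document render --
# stand in for functions hosted on their own separate servers.
def workload(name, seconds=0.2):
    time.sleep(seconds)  # simulate I/O or processing time
    return name

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    # Both workloads run at once; neither waits on the other.
    results = list(pool.map(workload, ["database query", "document render"]))
elapsed = time.perf_counter() - start

sequential_estimate = 0.4  # the same two jobs run back to back
print(f"parallel: {elapsed:.2f}s vs. sequential: ~{sequential_estimate:.1f}s")
```

Run back to back on one server, the two jobs would take the sum of their times; run in parallel, they take roughly the time of the slower one.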
Increasing the number of channels (or paths) to your devices (PCs, disks,
modems, etc.) is another way to create parallelism in the overall design.
Rather than one path out to your disk farm, have two. Favor hierarchies
for creating communication paths over "daisy-chains" or circles.
Hierarchies by nature incorporate parallelism, because each branch can process
simultaneously. A linear chain or circle must process everything sequentially.
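The hierarchy-versus-chain point is easy to quantify: in a daisy-chain of n nodes, a message may cross n - 1 links in sequence, while in a balanced hierarchy with a fan-out of b the worst case is only about log-base-b of n hops, and the branches work in parallel. A small sketch (the node count and fan-out are illustrative):

```python
def chain_hops(n):
    """Worst-case sequential hops to reach the last node in a daisy-chain."""
    return n - 1

def tree_depth(n, fanout):
    """Worst-case hops in a balanced hierarchy where each node feeds
    `fanout` children; siblings at each level process simultaneously."""
    depth, reach = 0, 1
    while reach < n:
        reach *= fanout
        depth += 1
    return depth

nodes = 64
print(chain_hops(nodes))            # 63 hops, one after another
print(tree_depth(nodes, fanout=4))  # 3 hops, with branches in parallel
```

The gap widens as the network grows, which is why the chain becomes the choke point long before the hierarchy does.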
Also, your servers and front-end processors should be linked together
on their own high-speed local-area network. This specialized LAN is in addition
to the LAN that goes out to your PCs. The server LAN should run at the
highest data rate that can be bought off the shelf; do not go cheap in
this area. The servers must talk to each other at extremely high speeds
in the distributed and parallel architecture suggested here.
2. To contain costs, buy hardware over software and invest in both hardware
and software over more people.
In today's software environment, a significant amount of time is consumed
"tinkering" to make different vendors' software "play well together" on
the same server. This results in an enormous and often hidden cost. Labor
hours tick away fast when integrating, and integration problems can be reduced
by simply hosting applications on their own dedicated servers.
In all cases, favor buying off-the-shelf rather than investing in more
full-time employees to create, maintain and operate your home-grown hardware
or software. If the functionality you want does not exist on the market,
I would strongly consider dropping the requirement.
Another reason for the emphasis on buying off-the-shelf is that the
market leverage of the federal government is not as great as many people
assume it is. The federal IT budget of around $40 billion is a small percentage
of the total national expenditure on IT. Furthermore, the federal government's
purchasing power is decentralized: It behaves more like 50 to 100 separate
companies than like a single entity.
3. Identify and then design around the information-processing choke
points, using basic approximations for speed.
When designing your systems, identify the choke points by determining each
component's basic speed and comparing the differences.
You should identify the basic measure of speed for each of your components
throughout the entire design of the proposed system hardware architecture.
The critical ones are as follows:
- Servers (or front-end processors) and the number of servers.
- The LAN linking the servers and front-end processors.
- The link running out of the back of the PC to the server.
- The read/write speed for your disks.
- Memory cache.
- Modem communication rates, including telephone line speeds and wireless
data communication rates.
- Keyboard typing speed.
Next, draw a diagram of each of the components and write the speeds
identified for each of these components next to them. Obvious problems will
come to the fore immediately. For example, several fast servers, each with
multiple co-processors running at high speed, will need to communicate with
your Internet service provider at something far greater than a single 56K
modem rate. You will need one or more T-1 lines in this instance.
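That T-1 arithmetic is simple to check. A T-1 carries 1.544 Mbit/s and a dial-up modem peaks at 56 Kbit/s (standard figures of the day); the user counts below are illustrative:

```python
# Back-of-the-envelope link sizing.
T1_BPS = 1_544_000   # a T-1 line carries 1.544 Mbit/s
MODEM_BPS = 56_000   # a dial-up modem peaks at 56 Kbit/s

def t1_lines_needed(concurrent_users, per_user_bps=MODEM_BPS):
    """T-1 lines needed so the ISP link is not the choke point."""
    demand = concurrent_users * per_user_bps
    return -(-demand // T1_BPS)  # ceiling division

print(t1_lines_needed(100))  # 100 modem-speed users need 4 T-1 lines
```

A hundred modem-speed users already saturate three and a half T-1s, so a single line would be the choke point almost immediately.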
The choke points can be identified simply by comparing the difference
in the orders of magnitude in the numbers you have written next to each
component. For example, compare a modem speed of 56K with a server speed
of 500 MHz. You don't have to be a scientist to gain an intuitive feel about
this mismatch. Such a broad feel will give insight into how many components
are needed in each category to maximize the responsiveness of the overall
system.
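The magnitude comparison can even be automated: rank the components by speed and look at the ratio between each neighbor, exactly as you would with the numbers written on the diagram. The throughput figures below are rough assumptions for illustration, not benchmarks:

```python
# Illustrative throughput figures in bits per second -- substitute
# your own measured numbers from the diagram.
components = {
    "server bus":       8_000_000_000,
    "server LAN":       1_000_000_000,
    "disk read/write":    400_000_000,
    "PC link":             10_000_000,
    "56K modem":               56_000,
    "keyboard typing":            400,  # ~50 characters/sec * 8 bits
}

# Rank fastest to slowest; the widest gap between neighbors
# marks the likeliest choke point.
ranked = sorted(components.items(), key=lambda kv: kv[1], reverse=True)
for (fast_name, fast), (slow_name, slow) in zip(ranked, ranked[1:]):
    print(f"{fast_name} is ~{fast // slow}x faster than {slow_name}")
```

The biggest jump in the printed ratios tells you where to add components or channels first.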
4. Incorporate significant extra capacity throughout your design, recognizing
that systems degrade exponentially and that the Internet is driving a boom
in information processing requirements.
As your server approaches capacity, responsiveness degrades dramatically.
For example, adding 10 percent more data to a processor nearing capacity
could lead to a 50 percent reduction in responsiveness. When any component
in your design nears capacity, simple tasks that normally take the user
seconds can stretch to hours or time out entirely.
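That sharp falloff matches basic queueing theory: in a simple single-server (M/M/1) model, mean response time grows in proportion to 1/(1 - utilization), so the last few percent of capacity are catastrophically expensive. A minimal sketch:

```python
# M/M/1 queueing rule of thumb: response time scales as 1 / (1 - utilization).
def relative_response_time(utilization):
    """Mean response time relative to a nearly idle system."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 1 / (1 - utilization)

for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{u:.0%} utilized -> {relative_response_time(u):.0f}x slower")
```

At 50 percent utilization responses are only twice the idle baseline; at 99 percent they are a hundred times slower, which is one way to read the 50 percent rule.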
With the low costs of hardware, it is senseless to have any dialogue
about "fine-tuning" the amount of hardware to buy; buy as much as you can
within your budget. Cut corners somewhere else. No one likes a slow system.
So if the server allows for four co-processors, buy four co-processors.
Maximize the memory. Maximize the number of channels out to other devices
such as disks, modems and cache. Buy the fastest disk drives and fully populate
the allowable disk drives. Never underestimate the growth of the Internet.
Never underestimate the capacity requirements for true video.
Make buys based on the assumption that 50 percent capacity is the maximum.
If any component reaches capacity or near-capacity, it can cause the whole
system to begin to fail.
Also, look to the future. Estimate what you think your organization
will need in two to three years, and at least double your expectations about
what will be necessary. Video, sound and the ever-increasing size of software
will bog down even the most far-reaching thinking about capacity requirements.
As processing speed increases, software applications grow to consume the
available speed; larger software then demands still faster processing.
This never-ending cycle pushes you to upgrade your hardware speed
continuously.