Data classification: A better way to prioritize stored files

A primer on technology that lets managers sort data by its value

For all the time that storage managers spend worrying about safeguarding their organization's data, many of them know little about the objects of their obsession. They usually know how much total data there is -- too much -- and how much available storage capacity they have -- never enough. But they often don't know which data is critical, which is less important and which belongs in the trash.

Helping storage managers recognize those differences and do something about them is the idea behind several new products now reaching the market. The latest arrival, Kazeon Systems' Information Server IS1200, started shipping last month. It joins about a half-dozen other data-classification tools now on the market.

Such tools would not have attracted much interest a few years ago when enterprise storage infrastructures were still mostly one-horse towns. All data, regardless of value, went in the one big, expensive storage bucket.

But now storage managers can more easily build tiered storage systems of varying cost and performance, thanks to products such as Serial Advanced Technology Attachment-based disk arrays, which are less expensive than traditional enterprise arrays. With new storage networking gear, managers can also move data around more efficiently and across greater distances than ever before.

Those new classification tools can help managers sort their data according to its value to the organization, among other criteria. Then managers can place the data in an appropriate storage tier.

"We see data classification as being the foundation for a range of applications that address governance, information discovery and information life cycle management," said Troy Toman, vice president of marketing at Kazeon. "This is what we think is driving people to want more visibility into their content."

Consider the problem

Most storage managers know little about most of their electronic content. Industry estimates peg the portion of unstructured information a typical organization owns at about 80 percent of the total. Unlike structured data, which resides in database management systems, unstructured information is stored in a variety of file formats: e-mail messages, word processing documents, spreadsheets, electronic images, presentations and so on.

Because practically any employee can create this unstructured content using a wide range of applications, storage managers typically deal with the glut of new data by essentially not dealing with it at all. They simply dump all the files into the organization's primary storage systems and give all of it the same five-star treatment. An important e-mail message from one executive to another and an MP3 music file downloaded by a summer intern have the same safeguards -- regular backup and long-term retention.

"If you've got data that's 10 or 20 years old, should you keep backing it up and using up storage space? That all costs money," said Buzz Walker, vice president of marketing and business development at Arkivio, which sells classification software.

Not knowing what's inside those files can create security risks because an organization could inadvertently mishandle sensitive or confidential material.

Consider solutions

The data-classification products now on the market vary in capability and solve different problems, but they share some common characteristics, said Brad O'Neill, senior analyst and consultant at the Taneja Group.

All the products provide a first level of classification by reading and recording metadata about each file in the system. This data includes information such as file type, name, size, date created and date last accessed. The products also go to an organization's network directory service to gather information about the user who owns the file.
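
As a rough sketch of what that first-level pass gathers, the following Python fragment walks a directory tree and records the same kinds of attributes. It is illustrative only, not any vendor's crawler, and it omits the directory-service lookup that real products perform to identify a file's owner.

    # Illustrative first-level classification: collect per-file metadata.
    # Real products also query a network directory service (for example,
    # Active Directory) to record the file's owner.
    import os
    from datetime import datetime

    def scan_metadata(root):
        records = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                info = os.stat(path)
                records.append({
                    "path": path,
                    "type": os.path.splitext(name)[1].lower(),  # extension stands in for file type
                    "size_bytes": info.st_size,
                    # st_ctime is creation time on Windows, metadata-change time on most Unix systems
                    "created": datetime.fromtimestamp(info.st_ctime),
                    "last_accessed": datetime.fromtimestamp(info.st_atime),
                })
        return records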

Storage managers can use this information to make more intelligent decisions about how to handle different file types, depending on whether their goal is lowering overall costs through the use of a tiered storage architecture, improving security or some other purpose. For example, duplicate files or certain file types, such as MP3s, can be tagged for deletion. Managers can direct critical files to primary storage and less important or infrequently accessed files to less expensive, second-tier storage.
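
A hypothetical rule pass over those metadata records might look like the sketch below. The tier names, the one-year staleness threshold and the use of a content hash to spot duplicates are illustrative assumptions, not any product's defaults.

    # Illustrative tiering rules applied to the records from the scan above.
    import hashlib
    from datetime import datetime, timedelta

    def content_hash(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def assign_tier(record, seen_hashes, stale_after=timedelta(days=365)):
        digest = content_hash(record["path"])
        if record["type"] == ".mp3" or digest in seen_hashes:
            return "delete"                 # undesirable file type or duplicate content
        seen_hashes.add(digest)
        if datetime.now() - record["last_accessed"] > stale_after:
            return "tier2"                  # infrequently accessed: cheaper storage
        return "tier1"                      # critical or active: primary storage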

All the products provide mechanisms for locating files and letting administrators and users access them; Arkivio's auto-stor software is one example.

Some products add another level of classification by analyzing files' text content and not just their metadata wrappers. Using a built-in search engine, products such as Kazeon's Information Server IS1200 create a searchable index of all the content they manage.

Such indexes can help users find and retrieve files that might have been moved off a primary storage system to an archival system. But they can also be useful to administrators for keeping tabs on sensitive information.
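
For readers who want a mental model of that second, content-level pass, the toy inverted index below captures the idea: tokenize each file's text and map every term to the files that contain it. It handles plain-text files only and stands in for, rather than reproduces, the search engines built into products such as the IS1200.

    # Toy inverted index over plain-text files; a stand-in for the built-in
    # search engines the products use, which handle many more file formats.
    import re
    from collections import defaultdict

    def build_index(paths):
        index = defaultdict(set)            # term -> set of file paths
        for path in paths:
            with open(path, errors="ignore") as f:
                for token in re.findall(r"[a-z0-9]+", f.read().lower()):
                    index[token].add(path)
        return index

    def search(index, term):
        return sorted(index.get(term.lower(), set()))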

"You can create extraction rules that tell our software to look for patterns, such as project names or a sequence of numbers that look like a Social Security number," Toman said. Storage managers can then flag files that contain certain information for storage on more secure devices, or they can track files to ensure information discovery or policy compliance.

Most of the products allow administrators to launch the metadata indexing at preset intervals. Running it immediately before a scheduled backup is a reasonable practice because it gives managers a chance to eliminate redundant or undesirable files and shorten the backup itself, Walker said.

Another option that some vendors offer is the ability to index files as they are created. For example, Njini's njiniEngine software captures file information at a file's origin and assigns policy parameters, such as storage location and retention period, to the file.
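
Njini's product intercepts file creation in the data path itself; the standard-library sketch below only approximates the idea by polling a directory for new files and attaching illustrative policy attributes, such as a target tier and retention period, based on file type.

    # Approximation of at-creation classification: poll for new files and
    # attach illustrative policy attributes. The policy table is made up.
    import os
    import time

    POLICY = {
        ".docx": {"tier": "tier1", "retain_days": 2555},
        ".mp3":  {"tier": "delete", "retain_days": 0},
    }
    DEFAULT_POLICY = {"tier": "tier2", "retain_days": 365}

    def watch(root, interval=5):
        known = set()
        while True:                          # runs until interrupted
            for dirpath, _, names in os.walk(root):
                for name in names:
                    path = os.path.join(dirpath, name)
                    if path not in known:
                        known.add(path)
                        ext = os.path.splitext(name)[1].lower()
                        print(path, POLICY.get(ext, DEFAULT_POLICY))  # hand off to a policy engine
            time.sleep(interval)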

Consider purchase and deployment

Most classification vendors are still in the early stages of finding customers. Arkivio's government customers include Denver and the National Nuclear Security Administration. Kazeon officials plan to use the company's close relationship with storage system vendor Network Appliance to tap the larger firm's already established federal sales office.

Prospective buyers surveying the market can narrow the already small field of vendors by knowing the type of information management problem they are trying to solve. For example, they should determine whether they need to manage storage costs, policy compliance or information security, O'Neill said.

"It's a small enough category today that I encourage end users to put all these vendors through their paces," O'Neill said. "It'll really come down to just three or four vendors who focus on what you want to do."

Potential buyers will also notice many marketing and product-integration relationships between the classification vendors and big storage system companies. Some of those deals will likely turn into outright acquisitions.

Many of the bigger companies have not developed their own classification technology, even though classification is an important component of an information life cycle management storage strategy, which most of them are pushing, O'Neill said.

Even with the possibility of acquisitions, he said, buyers should not be concerned about being locked into a larger vendor's platform later on. By its nature, classification technology has to work across heterogeneous platforms, he said.

"All the independent [classification] vendors have taken this approach," O'Neill said. "It would be very shortsighted to just give that up."


**********

Companies in the market

Here are eight companies that offer data-classification storage solutions.

  • Arkivio.
  • BridgeHead Software.
  • Index Engines.
  • Kazeon Systems.
  • Njini.
  • Scentric.
  • StoredIQ.
  • Trusted Edge.

-- John Zyskowski

Getting started using data storage classification

What you will need:

To get the most out of data discovery and classification solutions, it is best to implement them in a tiered storage environment, experts say.

The most important class of data goes in the top tier: primary disk storage comprising high-performance, high-cost Fibre Channel disks, for example. Less important data is stored in the second tier on lower-performance, lower-cost disk systems, including Serial Advanced Technology Attachment arrays. Data that is infrequently accessed but must still be retained might be stored on another tier consisting of an archival medium such as off-line tape.

Classification solutions are sold as hardware appliances preloaded with software or as software that must be loaded onto a server you already own. For simply discovering files on servers on a local-area or private network, the classification products work "as is" right out of the box. The products might need some tweaking to work with servers on a wide-area network.

What it will cost:

  • Arkivio's auto-stor starts at $4,000 per terabyte of data managed. Prices drop as the volume of data increases.
  • Kazeon's Information Server IS1200 costs $50,000. Volume discounts are available. One appliance can manage 20 million to 25 million documents.
  • Njini's njiniEngine lists for $70,000. The companion njini-Encount software, which manages files according to user-defined policies, starts at $15,000.

-- John Zyskowski