Listen up

Automated speech transcription, translation systems called on for front-line duty

Since the Sept. 11 terrorist attacks, the Qatar-based al-Jazeera TV network has become a valuable information source for the United States, beaming messages from Osama bin Laden and news and opinion from the Middle East.

The 24-hour Arabic-language station is part of a sea of audio information, including telephone communications and in-person interviews, that could yield clues to the terrorists' next moves. Carefully analyzing all of that information, however, would quickly strain the capacity and language expertise of U.S. intelligence agencies, and even the computers designed to sift through this type of electronic data.

To bolster that effort, the Defense Advanced Research Projects Agency launched two programs this spring: Babylon, which aims to develop a portable speech-to-speech translation device, and the Effective, Affordable, Reusable Speech-to-Text (EARS) project, for turning speech recordings into searchable digital text.

Together with the Defense Department's Language and Speech Exploitation Resources (LASER) initiative, these programs will develop and deliver improved speech transcription and translation capabilities to intelligence analysts and the military.

"People are definitely looking at how we can use speech technologies" to help with intelligence gathering and other security efforts, said Alan Black, a professor at the Language Technologies Institute at Carnegie Mellon University, which is working on Babylon.

The main technologies being studied are voice recognition, or converting speech to digitized text; speech synthesis, or converting text to speech; and dialogue, which manages interactions between speakers and machines, Black said.

Commonly used in commercial applications, such as call-center interactive voice response systems, these speech technologies accounted for $505 million in worldwide revenues in 2001, a total that is projected to reach $2 billion by 2006, according to the Kelsey Group Inc., a Princeton, N.J.-based market research company.

The intelligence community is interested in advanced forms of the technology but is in the early stages of using them, Black said.

On another front, In-Q-Tel, the CIA's venture capital firm, which supports the commercial development of innovative technology, is working with several companies, including IBM Corp. and Microsoft Corp., both of which have voice-recognition engines that can be used to control computer systems, according to Greg Pepus, visionary solutions architect at In-Q-Tel.

Another area of interest for In-Q-Tel is technology that can search audio or video recordings, though Pepus concedes that the highly accurate, automated speech transcription on which such searching capabilities depend "is still an emerging technology."

Voice technologies were being developed before the Sept. 11 attacks, but those efforts have since been put on a fast track.

One Smart Wiretap

Reducing word error rates and improving foreign language speech transcription are key goals of DARPA's recently launched EARS program.

Within the first three years of the five-year program, the goal is to reduce the technology's word error rate to less than 10 percent. The error rate with current technology is 25 percent to 50 percent when transcribing conversations and 15 percent to 30 percent for broadcast material.
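
For context, word error rate is conventionally computed by aligning a system's output against a human reference transcript and counting substituted, deleted and inserted words as a fraction of the reference length. A minimal Python sketch of that standard calculation (the example sentences here are invented for illustration):

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """WER = (substitutions + deletions + insertions) / reference length,
        computed with a word-level Levenshtein alignment."""
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = minimum edits turning the first i reference words
        # into the first j hypothesis words
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i                               # i deletions
        for j in range(len(hyp) + 1):
            d[0][j] = j                               # j insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + sub)  # substitution or match
        return d[len(ref)][len(hyp)] / len(ref)

    # Two errors against an eight-word reference: WER = 0.25, or 25 percent
    print(word_error_rate("please halt the truck at the next checkpoint",
                          "please hold the truck at next checkpoint"))

On this measure, today's 25 percent to 50 percent conversational error rates mean roughly one word in four to one word in two comes out wrong, which underscores how aggressive the sub-10 percent target is.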

"These are very ambitious goals, but we have to achieve them to keep the program funded," said Elizabeth Shriberg, senior researcher at SRI International of Menlo Park, Calif., one of the researchers in the EARS program.

EARS also calls for extracting and using metadata to "turn a transcript into something that can be read by a government analyst," Shriberg said. This means adding punctuation and eliminating "disfluencies," which occur when speakers "start to say something then change their minds."
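
As a crude illustration of the idea (the real EARS work relies on statistical models rather than hand-written patterns, and the filler words and sample utterance below are invented), a few lines of Python show what such cleanup involves:

    import re

    # Drop common filler words; an optional trailing comma goes with them.
    FILLERS = re.compile(r"\s*\b(?:uh|um|you know|i mean)\b,?", re.IGNORECASE)
    # Collapse immediate one- or two-word restarts: "we need we need" -> "we need"
    RESTARTS = re.compile(r"\b(\w+(?:\s+\w+)?)(?:\s+\1\b)+", re.IGNORECASE)

    def clean_transcript(raw: str) -> str:
        text = FILLERS.sub("", raw)
        text = RESTARTS.sub(r"\1", text)
        return re.sub(r"\s{2,}", " ", text).strip()

    print(clean_transcript("uh we need we need to um move the trucks"))
    # -> "we need to move the trucks"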

Another program requirement is the ability to extend this technology to other languages, starting with Arabic and Mandarin Chinese. Each language poses unique challenges and requires a different approach, she said. For example, Arabic has many different dialects, and Mandarin is a tone language, in which pitch distinguishes otherwise identical syllables, which can lead to many possible translations.

There are three research teams in the main EARS program. SRI and Cambridge, Mass.-based BBN Technologies each lead their own teams of international researchers; the United Kingdom's Cambridge University leads the third team.

A fourth team is not bound by the same schedule as the other three. Instead, it will explore "novel approaches" that are likely to take longer to come to fruition, said Nelson Morgan, director of the International Computer Science Institute, a member of the fourth team.

Pocket Translators

DARPA's Babylon program addresses a different aspect of voice technology. Begun in March, the three-year, $24 million program is developing handheld, speech-to-speech translators capable of mediating conversations between two people speaking different languages.

Babylon grew out of a previous Small Business Innovation Research project run by DARPA to develop a limited, one-way handheld translation device. The device created from this first program, the "phraselator," responds to English voice commands and is "essentially an electronic phrase book with fixed phrases that are mapped to a recording in a target language," said Army Lt. Col. James Bass, Babylon program manager.

The phraselator, which uses SRI technology, can translate about 1,500 English phrases into another language.
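
In data-structure terms, the fixed-phrase design Bass describes is little more than a lookup table from recognized English phrases to prerecorded foreign-language audio. A toy Python sketch (the phrases and file names are invented, not taken from the actual device):

    # Toy model of the one-way phraselator: each recognized English phrase
    # maps straight to a prerecorded clip in the target language.
    PHRASE_BOOK = {
        "pashto": {
            "stop your vehicle": "audio/pashto/stop_vehicle.wav",
            "do you need a doctor": "audio/pashto/need_doctor.wav",
            "put down your weapon": "audio/pashto/put_down_weapon.wav",
        },
    }

    def play_for(phrase: str, language: str) -> str:
        """Return the clip to play, or fail if the phrase is not in the book."""
        try:
            return PHRASE_BOOK[language][phrase.lower().strip()]
        except KeyError:
            raise ValueError(f"no {language} recording for {phrase!r}")

    print(play_for("Stop your vehicle", "pashto"))  # audio/pashto/stop_vehicle.wav

The design's strength and its limit are the same thing: anything outside the book simply cannot be said.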

After the Sept. 11 attacks, a military version of the device — a ruggedized personal digital assistant — was rushed into production at Middletown, R.I.-based Marine Acoustics Inc. and sent for use in Afghanistan, helping soldiers communicate in Pashto, Dari Farsi and Arabic, Bass said. About 500 devices have been made.

Babylon's ultimate goal is to deliver a two-way translator. "It is sort of a bilingual phraselator," said Kristin Precoda, a program director at SRI, which is competing with BBN Technologies to build a rudimentary version of the next-generation device.

Like the original phraselator, the two-way translator will be constrained to a limited set of phrases. Questions can be asked in English, and foreign speakers will have a limited range of answers they can give in their own language for translation back into English.
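
Conceptually (this sketch and its phrases are invented, not the contractors' design), the two-way constraint means each English question carries its own closed set of recognizable replies:

    # Invented sketch of the constrained two-way exchange: each question
    # defines the closed set of foreign replies the device can recognize,
    # each mapped back to English.
    DIALOGUE = {
        "do you need a doctor": {
            "ho": "yes",   # placeholder transliterations, not verified Pashto
            "na": "no",
        },
    }

    def interpret_reply(question: str, recognized: str) -> str:
        """Map a recognized foreign reply onto the question's allowed answers."""
        answers = DIALOGUE[question.lower()]
        if recognized not in answers:
            raise ValueError("reply falls outside the supported phrase set")
        return answers[recognized]

    print(interpret_reply("Do you need a doctor", "ho"))  # -> "yes"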

Another goal of Babylon is to develop translation systems capable of handling "more free-form input," said Horacio Franco, a program director at SRI.

There are four participants in this effort, each tackling one language: IBM has Mandarin Chinese; SRI has Pashto; HRL Laboratories Inc. and the University of Southern California have Dari Farsi; and Carnegie Mellon has Arabic. There is "no competition because of the specialization required and performance expectations," Bass said.

Although the system will allow for freer discussions, there will still be constraints.

"With current technology, the concept of the universal translator where you can say anything is years away," Franco said. "The closest we get to [that] today is a 'free-type' of input speech only in one domain at a time."

For example, the ability to translate English and Arabic communications used within a specific domain or situation, such as a medical examination or refugee processing, is being developed, Carnegie Mellon's Black said.

Unlike EARS, Babylon has no fixed error rate target. "It is harder to measure in this case," Precoda said. "If you get, for example, a 10 percent error rate, that can mean a lot of different things depending on precisely where the errors are. So our target is to make the users satisfied with the system."

The devices also must be able to operate in noisy environments, where trucks might be going by or people might be shouting, Black said.

Another issue is distinguishing between male and female voices and conveying "the right tone in the translation," Black said. "If you are telling someone, 'Halt or I'll shoot,' the translation has to be suitable so it appears as a command with force behind it."

Those problems are among the biggest challenges for speech recognition technologies, Franco said.

But when it comes to languages, technology is not always the biggest stumbling block. For example, when translating English to Pashto, a major challenge has been just getting access to speakers who can produce data, Precoda said.

"There are a couple of grammar books," but it is not like English, which "has been studied for centuries," said Chris Culy, a senior research linguist at SRI.

And as far as technology is concerned, it will be challenging to fit a capable vocabulary recognizer "into a handheld [system] in terms of memory and compute power," Franco said.

McKenna is a freelance writer based in the San Francisco Bay area.