Adobe and Autonomy Launch Creative Use of Speech
Speech analytics vendor Autonomy joined forces with Adobe Systems, integrating its Intelligent Data Operating Layer (IDOL) platform into the newly launched Adobe Creative Suite 4 Production Premium.
The speech analytics technology provided by Autonomy will enable automated encoding, indexing, and retrieval of audio files created with the Adobe suite, allowing users to locate audio clips based on the meaning of their content.
"Adobe and Autonomy have a strategic relationship that has been going on for a couple of years now, and we expect it to have considerable additional value to the marketplace going forward," says Stouffer Egan, CEO of Autonomy, Inc., the U.S. operations of U.K.-based Autonomy Corp. "We will continue to work together to combine our technology propositions for a better experience for the end market. It’s a very, very good relationship and a real win-win for each party."
Adobe Creative Suite 4 Production Premium is a toolset used for video and audio editing, still and motion graphics, visual effects, and interactive media design. With Autonomy’s speech analytics technology, users of Creative Suite 4 Production Premium will be able to search speaker information and full transcripts of audio streams in seven languages: English, Spanish, French, German, Italian, Japanese, and Korean.
The Autonomy technology will also enable the use of new Speech Search in Adobe Premiere Pro Creative Suite 4 and Adobe Soundbooth Creative Suite 4. This will enable users to index spoken dialogue directly into the IDOL platform, making it easy for them to locate particular audio clips, identify speakers, and align multiple takes of content around a script.
"The other important piece of intellectual property we’ve licensed them is our speech-to-text technology, and that’s unique in a way in that it’s the best-in-class technology on the market for speaker-independent, vocabulary-independent transcription to feed that IDOL engine," Egan says. "Since that engine runs off of independent variables in unstructured information, what we feed that engine for indexing purposes is actually two different streams. The best we can do on speaker-independent, vocabulary-independent transcription, but also raw phonemes."
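The dual-stream approach Egan describes can be pictured with a minimal sketch: each clip is indexed both by its (imperfect) speech-to-text transcript and by its raw phoneme stream, so a query can still match when one stream misses. All class and method names below are illustrative assumptions; this is not Autonomy's IDOL API.

```python
# Hypothetical sketch of dual-stream audio indexing: clips are indexed by
# both a text transcript and a raw phoneme sequence, and a search consults
# both streams. Names and data are invented for illustration only.

class ClipIndex:
    def __init__(self):
        self.transcripts = {}  # clip_id -> list of transcript words
        self.phonemes = {}     # clip_id -> phoneme string for the clip

    def add_clip(self, clip_id, transcript, phoneme_stream):
        self.transcripts[clip_id] = transcript.lower().split()
        self.phonemes[clip_id] = phoneme_stream

    def search(self, word, word_phonemes):
        """Return ids of clips whose transcript OR phoneme stream matches."""
        hits = set()
        for clip_id, words in self.transcripts.items():
            if word.lower() in words:
                hits.add(clip_id)          # matched via the text stream
        for clip_id, stream in self.phonemes.items():
            if word_phonemes in stream:
                hits.add(clip_id)          # matched via the phoneme stream
        return sorted(hits)

index = ClipIndex()
index.add_clip("take1", "roll the title sequence", "r ow l dh ax t ay t ax l")
index.add_clip("take2", "cut to the interview", "k ah t t uw dh ax")
print(index.search("title", "t ay t ax l"))   # matches take1 via both streams
```

The point of keeping both streams is resilience: a word the transcriber mangles can still be found by its phoneme pattern, which is why Egan describes feeding the engine "two different streams."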
Egan adds: "For users…what it really does is it takes the creative professional from a situation where keeping track of assets was done exclusively on the basis of metadata and takes them on a deeper dive such that every asset is understood at the audio track level so that all spoken words are captured and factor into how a media asset is understood."
Simon Hayhurst, senior director of product management for Dynamic Media at Adobe, also praised the partnership and its benefits for users.
"[Autonomy has] been a great partner and they’re clearly best-of-class in terms of overall searching capabilities. It’s been a really good technology partnership," he says. "The challenge we see all the time is you’re dealing with hundreds or thousands of clips as you’re trying to put together a film, a documentary, a drama, and your ability to navigate those clips quickly to find the right part in the editing process is really important. [But] when you use the speech search technology as we’ve put it together in our editing tools, you also get out of the end result searchable video when you take it onto the Web. So there’s this magic one-two punch where you get something that’s actually a faster way of editing…But you also get something that’s much more valuable at the end because you end up with a searchable video."