Speech Technology Magazine

Next IT Launches Engagement APIs

Engagement APIs extend Next IT's Alme digital assistant to any device or endpoint.
Posted Nov 20, 2015

Next IT today released new Engagement APIs that make it possible to run its intelligent virtual assistants (IVAs) on any device or endpoint.

These APIs extend Next IT's flagship Alme platform, giving users more control and configuration flexibility.

The highlight of Next IT's Engagement APIs is the new Conversation API package, which makes it possible for Alme's Natural Language Understanding technology to process any input (text, voice, tap, or click) on any device. Conversation can now go wherever it needs to be to engage customers. The new APIs also add functionality for configuration and media handling.

The Alme Engagement API features the following four APIs:

  • Conversation API: The Conversation API exposes methods for sending requests to the Alme platform for natural language understanding. This allows for input-to-response interactions, whether the input is user text, a Unit request, or an AppEvent, such as a button press, Web page navigation, or another non-text user action that affects context.
  • Conversation Support API: The Conversation Support API exposes methods for retrieving the cached conversation history (inputs and responses) for user sessions. This allows for collection and display of the current conversation to the user in the event of a page refresh or navigation.
  • Configuration API: The Configuration API exposes methods for retrieving externally available configuration settings for use by the client. The settings can be segregated by channel, allowing for a Web client to share a set of common settings with a mobile client, while also having access to settings that are unique to each client channel.
  • Media Retrieval API: The Media Retrieval API exposes methods for requesting resources from the Alme platform, provided by custom media handlers. This allows for the Alme platform to support such features as pre-recorded voice files or text-to-speech (TTS), as well as any custom implementations, such as serving images to the UI that are provided with an FPML set.
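To make the division of labor among these APIs concrete, here is a minimal sketch of what a client wrapping them might look like. The article does not publish Next IT's actual API contract, so the endpoint paths, payload fields, and the `AlmeClient` class name below are all illustrative assumptions; only the API names and their described purposes come from the announcement.

```python
import json
from urllib import request


class AlmeClient:
    """Hypothetical client for the Alme Engagement APIs.

    Endpoint paths and payload shapes are assumptions for illustration;
    Next IT's real API contract is not documented in this article.
    """

    def __init__(self, base_url, session_id):
        self.base_url = base_url.rstrip("/")
        self.session_id = session_id

    def _post(self, path, payload):
        # Send a JSON POST request and decode the JSON response.
        req = request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return json.load(resp)

    def send_text(self, text):
        # Conversation API: submit user text for NLU processing.
        return self._post("/conversation/input", {
            "sessionId": self.session_id,
            "type": "text",
            "value": text,
        })

    def send_app_event(self, name, context=None):
        # Conversation API: a non-text AppEvent (button press,
        # Web page navigation) that affects conversation context.
        return self._post("/conversation/event", {
            "sessionId": self.session_id,
            "event": name,
            "context": context or {},
        })

    def history(self):
        # Conversation Support API: fetch the cached inputs/responses
        # for this session, e.g. to redraw the transcript after a
        # page refresh or navigation.
        return self._post("/conversation/history", {
            "sessionId": self.session_id,
        })
```

A Web client and a mobile client could each wrap the same class, pulling their channel-specific settings from the Configuration API and resolving voice files or images through the Media Retrieval API.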

"Good assistants answer your questions, but great assistants answer them wherever and whenever you need them," said Tracy Malingo, executive vice president of product and delivery at Next IT, in a statement. "Our APIs change the game for modern enterprises seeking to better engage customers and support their workforces. It's critical to deploy assistants to the unique endpoints that matter most to any given business, and we make that possible."

"The era of one-size-fits-all assistants has come to a close," said Rick Collins, president of Next IT Enterprise, in a statement. "Today's enterprise IT environment is characterized by constant change and myriad end-user needs. IVAs have always excelled at delivering a personalized experience, but we're now folding that dexterity into all parts of the product. From configuration to deployment and mobile to wearables, we can deliver results."
