Full Description
Scope
Multimodal Conversation (MPAI-MMC) specifies:

1. Data Formats for the analysis of text, speech, and other non-verbal components as used in human-machine and machine-machine conversation applications.
2. Use Cases implemented in the AI Framework using Data Formats from MPAI-MMC and other MPAI standards, providing recognized applications in the Multimodal Conversation domain.

This Technical Specification includes the following Use Cases:

1. Conversation with Personal Status (CPS), enabling conversation and question answering with a machine able to extract the inner state of the entity it is conversing with, and presenting itself as a speaking digital human able to express a Personal Status. By adding or removing minor components of this general Use Case, five Use Cases are spawned:
2. Conversation About a Scene (CAS), where a human converses with a machine while pointing at objects scattered in a room and displaying Personal Status in their speech, face, and gestures, and the machine responds by displaying its Personal Status in speech, face, and gesture.
3. Virtual Secretary for Videoconference (VSV), where an avatar not representing a human in a virtual avatar-based videoconference extracts Personal Status from Text, Speech, Face, and Gesture, displays a summary of what other avatars say, and receives and acts on comments.
4. Human-Connected Autonomous Vehicle Interaction (HCI), where humans converse with a machine displaying Personal Status after having been properly identified by the machine from their speech and face in outdoor and indoor conditions, and the machine responds by displaying its Personal Status in speech, face, and gesture.
5. Conversation with Emotion (CWE), supporting audio-visual conversation with a machine impersonated by a synthetic voice and an animated face.
6. Multimodal Question Answering (MQA), supporting requests for information about a displayed object.
7. Three Use Cases supporting text and speech translation applications. In each Use Case, users can specify whether speech or text is used as input and, if it is speech, whether their speech features are preserved in the interpreted speech:
   7.1. Unidirectional Speech Translation (UST).
   7.2. Bidirectional Speech Translation (BST).
   7.3. One-to-Many Speech Translation (MST).
8. The Personal Status Extraction (PSE) Composite AIM, which estimates the Personal Status conveyed by the Text, Speech, Face, and Gesture of a real or digital human.

Abstract
New IEEE Standard - Active - Draft.