Research Priorities
Continuous speech recognition, emotion recognition, acoustics, and intelligent dialogue management
- Siri, Alexa, and co.: speech recognition under natural conditions
- Signals in real environments: noise reduction, source separation/localization, beamforming (see the sketch after this list), preserving quality under compression (MPEG, ...)
- Dialogues with machines: intelligent dialogue strategies using prosodic speech features and dialogue histories
- Emotions and user states: emotion recognition from speech and other user features, used to improve dialogues
- Multiple users: situation and environment modeling, speaker identification
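As a concrete illustration of the signal-processing item above, here is a minimal delay-and-sum beamforming sketch for a uniform linear microphone array. The array geometry, sample rate, and steering angle are illustrative assumptions, not details of any specific system listed here.

```python
# Minimal sketch: delay-and-sum beamforming for a uniform linear microphone
# array. Geometry, sample rate, and look direction are illustrative.
import numpy as np

def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
    """Steer a linear array toward `angle_deg` and average the aligned channels.

    signals:       (n_mics, n_samples) time-domain microphone signals
    mic_positions: (n_mics,) positions along the array axis in metres
    angle_deg:     look direction relative to broadside
    fs:            sample rate in Hz
    c:             speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    # Per-microphone delay of a plane wave arriving from the look direction.
    delays = mic_positions * np.sin(np.deg2rad(angle_deg)) / c
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Apply the fractional delays as phase shifts in the frequency domain.
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = spectra * phase
    # Averaging the aligned channels reinforces the target and attenuates
    # diffuse noise and sources from other directions.
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)

# Example: 4-microphone array with 5 cm spacing, steered to 30 degrees.
fs = 16000
mics = np.arange(4) * 0.05
channels = np.random.randn(4, fs)  # placeholder for recorded channels
enhanced = delay_and_sum(channels, mics, angle_deg=30.0, fs=fs)
```

Delay-and-sum is the simplest beamformer; adaptive variants such as MVDR trade this simplicity for stronger interference suppression.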
Big and Small Data, Deep Architectures
- Lots of information? -> Information fusion with machine learning
- Supervised and semi-supervised learning
- No data for your domain? -> Transfer learning, adaptation architectures, synthetic data
- Too much data? -> modality-controlled and semi-supervised annotations
- Capturing temporal dependencies with recurrent (deep) neural networks (see the sketch after this list)
- Biologically inspired, dynamic artificial neural networks
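To illustrate the recurrent-network item above, the following is a minimal sketch of a sequence classifier that models temporal dependence in frame-wise acoustic features, e.g. for emotion recognition from speech. The PyTorch layer sizes and the four-class output are illustrative assumptions.

```python
# Minimal sketch: an LSTM-based classifier over a feature sequence.
# Input/hidden sizes and the number of classes are illustrative.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_classes=4):
        super().__init__()
        # The LSTM reads the sequence frame by frame and carries a hidden
        # state, so later frames are interpreted in the context of earlier ones.
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, n_features)
        _, (h_n, _) = self.rnn(x)    # h_n: (1, batch, hidden), final time step
        return self.out(h_n[-1])     # class logits per sequence

# Example: a batch of 8 utterances, 200 frames of 40-dim features each.
model = SequenceClassifier()
logits = model(torch.randn(8, 200, 40))   # shape (8, 4)
```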
Mobile Systems, Safe Cars, LabVIEW and Raspberry Pi, Robot Control, Smart Companions
- Ambient Assisted Living: Assistance in the home with multimodal sensors
- LabVIEW robotics control platform (speech-driven): the National Instruments industry standard
- Small footprint: dialogue control for mobile applications on the Raspberry Pi (see the sketch after this list)
- Recognizing user states and emotions -> safer driving through adaptive in-car assistance
- Smart everywhere: Assistance systems as companions
- Recognizing user intentions, proactive system action: Intentional Anticipatory Interactive Systems
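As a sketch of the small-footprint dialogue control mentioned above, the following finite-state controller is simple enough to run on an embedded board such as a Raspberry Pi. The states, keywords, and responses are hypothetical, and the speech recognition and actuation back-ends are stubbed out.

```python
# Minimal sketch: a finite-state dialogue controller for a small device.
# States, keywords, and responses are illustrative placeholders.
class DialogueController:
    def __init__(self):
        self.state = "idle"

    def handle(self, utterance):
        text = utterance.lower()
        if self.state == "idle":
            if "light" in text:
                self.state = "confirm_light"
                return "Do you want the light on or off?"
            return "Sorry, I did not understand."
        if self.state == "confirm_light":
            self.state = "idle"
            if "on" in text:
                return "Turning the light on."   # a real system would trigger GPIO here
            if "off" in text:
                return "Turning the light off."
            return "Cancelled."
        return "Sorry, I did not understand."

# Example turn sequence.
dc = DialogueController()
print(dc.handle("Please switch the light"))   # asks for on/off
print(dc.handle("On, please"))                # acts and returns to idle
```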