The SCEAS System

Conferences in DBLP

Int. Conf. on Multimodal Interfaces (ICMI)
2009 (conf/icmi/2009)


  1. Living better with robots.
  2. Discovering group nonverbal conversational patterns with topics.
  3. Agreement detection in multiparty conversation.
  4. Multimodal floor control shift detection.
  5. Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities.
  6. Dialog in the open world: platform and applications.
  7. Towards adapting fantasy, curiosity and challenge in multimodal dialogue systems for preschoolers.
  8. Building multimodal applications with EMMA.
  9. A speaker diarization method based on the probabilistic fusion of audio-visual location information.
  10. Dynamic robot autonomy: investigating the effects of robot decision-making in a human-robot team task.
  11. A speech mashup framework for multimodal mobile services.
  12. Detecting, tracking and interacting with people in a public space.
  13. Cache-based language model adaptation using visual attention for ASR in meeting scenarios.
  14. Multimodal end-of-turn prediction in multi-party meetings.
  15. Recognizing communicative facial expressions for discovering interpersonal emotions in group meetings.
  16. Classification of patient case discussions through analysis of vocalisation graphs.
  17. Learning from preferences and selected multimodal features of players.
  18. Detecting user engagement with a robot companion using task and social interaction-based features.
  19. Multi-modal features for real-time detection of human-robot interaction categories.
  20. Modeling culturally authentic style shifting with virtual peers.
  21. Between linguistic attention and gaze fixations in multimodal conversational interfaces.
  22. Head-up interaction: can we break our addiction to the screen and keyboard?
  23. Fusion engines for multimodal input: a survey.
  24. A fusion framework for multimodal interactive applications.
  25. Benchmarking fusion engines of multimodal interactive systems.
  26. Temporal aspects of CARE-based multimodal fusion: from a fusion mechanism to composition components and WoZ components.
  27. Formal description techniques to support the design, construction and evaluation of fusion engines for sure (safe, usable, reliable and evolvable) multimodal interfaces.
  28. Multimodal inference for driver-vehicle interaction.
  29. Multimodal integration of natural gaze behavior for intention recognition during object manipulation.
  30. Salience in the generation of multimodal referring acts.
  31. Communicative gestures in coreference identification in multiparty meetings.
  32. Realtime meeting analysis and 3D meeting viewer based on omnidirectional multimodal sensors.
  33. Guiding hand: a teaching tool for handwriting.
  34. A multimedia retrieval system using speech input.
  35. Navigation with a passive brain based interface.
  36. A multimodal predictive-interactive application for computer assisted transcription and translation.
  37. Multi-modal communication system.
  38. HephaisTK: a toolkit for rapid prototyping of multimodal interfaces.
  39. State: an assisted document transcription system.
  40. Demonstration: first steps in emotional expression of the humanoid robot Nao.
  41. WiiNote: multimodal application facilitating multi-user photo annotation activity.
  42. Are gesture-based interfaces the future of human computer interaction?
  43. Providing expressive eye movement to virtual agents.
  44. Mediated attention with multimodal augmented reality.
  45. Grounding spatial prepositions for video search.
  46. Multi-modal and multi-camera attention in smart environments.
  47. RVDT: a design space for multiple input devices, multiple views and multiple display surfaces combination.
  48. Learning and predicting multimodal daily life patterns from cell phones.
  49. Visual based picking supported by context awareness: comparing picking performance using paper-based lists versus lists presented on a head mounted display with contextual support.
  50. Adaptation from partially supervised handwritten text transcriptions.
  51. Recognizing events with temporal random forests.
  52. Activity-aware ECG-based patient authentication for remote health monitoring.
  53. GaZIR: gaze-based zooming interface for image retrieval.
  54. Voice key board: multimodal Indic text input.
  55. Evaluating the effect of temporal parameters for vibrotactile saltatory patterns.
  56. Mapping information to audio and tactile icons.
  57. Augmented reality target finding based on tactile cues.
  58. Speaker change detection with privacy-preserving audio cues.
  59. MirrorTrack: tracking with reflection - comparison with top-down approach.
  60. A framework for continuous multimodal sign language recognition.

NOTICE 1: The system may occasionally be unavailable or not work properly, since it is still under development and receives continuous upgrades.
NOTICE 2: The rankings presented on this page should NOT be considered formal, since the citation information in DBLP is incomplete.
 
System created by asidirop@csd.auth.gr [http://users.auth.gr/~asidirop/]
for the Data Engineering Laboratory, Department of Informatics, Aristotle University © 2002