The SCEAS System

Conferences in DBLP

Int. Conf. on Multimodal Interfaces (ICMI) (icmi)
2007 (conf/icmi/2007)


  1. Interfacing life: a year in the life of a research lab.
  2. The great challenge of multimodal interfaces: towards symbiosis of human and robots.
  3. Just in time learning: implementing principles of multimodal processing and learning for education.
  4. The painful face: pain expression recognition using active appearance models.
  5. Faces of pain: automated measurement of spontaneous facial expressions of genuine and posed pain.
  6. Visual inference of human emotion and behaviour.
  7. Audiovisual recognition of spontaneous interest within conversations.
  8. How to distinguish posed from spontaneous smiles using geometric features.
  9. Eliciting, capturing and tagging spontaneous facial affect in autism spectrum disorder.
  10. Statistical segmentation and recognition of fingertip trajectories for a gesture interface.
  11. A tactile language for intuitive human-robot communication.
  12. Simultaneous prediction of dialog acts and address types in three-party conversations.
  13. Developing and analyzing intuitive modes for interactive object modeling.
  14. Extraction of important interactions in medical interviews using nonverbal information.
  15. Towards smart meeting: enabling technologies and a real-world application.
  16. Multimodal cues for addressee-hood in triadic communication with a human information retrieval agent.
  17. The effect of input mode on inactivity and interaction times of multimodal systems.
  18. Positional mapping: keyboard mapping based on characters' writing positions for mobile devices.
  19. Five-key text input using rhythmic mappings.
  20. Toward content-aware multimodal tagging of personal photo collections.
  21. A survey of affect recognition methods: audio, visual and spontaneous expressions.
  22. Real-time expression cloning using appearance models.
  23. Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking.
  24. Map navigation with mobile devices: virtual versus physical movement with and without visual context.
  25. Can you talk or only touch-talk: a VoIP-based phone feature for quick, quiet, and private communication.
  26. Designing audio and tactile crossmodal icons for mobile devices.
  27. A study on the scalability of non-preferred hand mode manipulation.
  28. VoicePen: augmenting pen input with simultaneous non-linguistic vocalization.
  29. A large-scale behavior corpus including multi-angle video data for observing infants' long-term developmental processes.
  30. The MICOLE architecture: multimodal support for inclusion of visually impaired children.
  31. Interfaces for musical activities and interfaces for musicians are not the same: the case for CODES, a web-based environment for cooperative music prototyping.
  32. TotalRecall: visualization and semi-automatic annotation of very large audio-visual corpora.
  33. Extensible middleware framework for multimodal interfaces in distributed environments.
  34. Temporal filtering of visual speech for audio-visual speech recognition in acoustically and visually challenging environments.
  35. Reciprocal attentive communication in remote meeting with a humanoid robot.
  36. Password management using doodles.
  37. A computational model for spatial expression resolution.
  38. Disambiguating speech commands using physical context.
  39. Automatic inference of cross-modal nonverbal interactions in multiparty conversations: "who responds to whom, when, and how?" from gaze, head gestures, and utterances.
  40. Influencing social dynamics in meetings through a peripheral display.
  41. Using the influence model to recognize functional roles in meetings.
  42. User impressions of a stuffed doll robot's facing direction in animation systems.
  43. Speech-driven embodied entrainment character system with hand motion input in mobile environment.
  44. Natural multimodal dialogue systems: a configurable dialogue and presentation strategies component.
  45. Modeling human interaction resources to support the design of wearable multimodal systems.
  46. Speech-filtered bubble ray: improving target acquisition on display walls.
  47. Using pen input features as indices of cognitive load.
  48. Automated generation of non-verbal behavior for virtual embodied characters.
  49. Detecting communication errors from visual cues during the system's conversational turn.
  50. Multimodal interaction analysis in a smart house.
  51. A multi-modal mobile device for learning Japanese kanji characters through mnemonic stories.
  52. 3D augmented mirror: a multimodal interface for string instrument learning and teaching with gesture support.
  53. Interest estimation based on dynamic Bayesian networks for visual attentive presentation agents.
  54. On-line multi-modal speaker diarization.
  55. Presentation sensei: a presentation training system using speech and image processing.
  56. The world of mushrooms: human-computer interaction prototype systems for ambient intelligence.
  57. Evaluation of haptically augmented touchscreen GUI elements under cognitive load.
  58. Multimodal interfaces in semantic interaction.
  59. Workshop on tagging, mining and retrieval of human related activity information.
  60. Workshop on massive datasets.

NOTICE 1
The system may occasionally be unavailable or malfunction, since it is still under development and receives continuous upgrades.
NOTICE 2
The rankings presented on this page should NOT be considered formal, since the citation information in DBLP is incomplete.
System created by asidirop@csd.auth.gr [http://users.auth.gr/~asidirop/]
for the Data Engineering Laboratory, Department of Informatics, Aristotle University © 2002