The SCEAS System
Publications of Yoshinori Kuno

  1. Kang-Hyun Jo, Yoshinori Kuno, Yoshiaki Shirai
    Context-Based Recognition of Manipulative Hand Gestures for Human Computer Interaction. [Citation Graph (0, 0)][DBLP]
    ACCV (2), 1998, pp:368-375 [Conf]
  2. Nobutaka Shimada, Yoshiaki Shirai, Yoshinori Kuno, Jun Miura
    3-D Pose Estimation and Model Refinement of an Articulated Object from a Monocular Image Sequence. [Citation Graph (0, 0)][DBLP]
    ACCV (1), 1998, pp:672-679 [Conf]
  3. Nobutaka Shimada, Yoshiaki Shirai, Yoshinori Kuno
    Model Adaptation and Posture Estimation of Moving Articulated Object Using Monocular Camera. [Citation Graph (0, 0)][DBLP]
    AMDO, 2000, pp:159-172 [Conf]
  4. Zaliyana Mohd Hanafiah, Chizu Yamazaki, Akio Nakamura, Yoshinori Kuno
    Human-robot speech interface understanding inexplicit utterances using vision. [Citation Graph (0, 0)][DBLP]
    CHI Extended Abstracts, 2004, pp:1321-1324 [Conf]
  5. Yoshinori Kuno, Tomoyuki Ishiyama, Satoru Nakanishi, Yoshiaki Shirai
    Combining Observations of Intentional and Unintentional Behaviors for Human-Computer Interaction. [Citation Graph (0, 0)][DBLP]
    CHI, 1999, pp:238-245 [Conf]
  6. Yoshinori Kuno, Dai Miyauchi, Akio Nakamura
    Robotic method of taking the initiative in eye contact. [Citation Graph (0, 0)][DBLP]
    CHI Extended Abstracts, 2005, pp:1577-1580 [Conf]
  7. Yoshinori Kuno, Akio Nakamura
    Robotic wheelchair looking at all people. [Citation Graph (0, 0)][DBLP]
    CHI Extended Abstracts, 2003, pp:1008-1009 [Conf]
  8. Dai Miyauchi, Arihiro Sakurai, Akio Nakamura, Yoshinori Kuno
    Active eye contact for human-robot communication. [Citation Graph (0, 0)][DBLP]
    CHI Extended Abstracts, 2004, pp:1099-1102 [Conf]
  9. Akio Nakamura, Sou Tabata, Tomoya Ueda, Shinichiro Kiyofuji, Yoshinori Kuno
    Multimodal presentation method for a dance training system. [Citation Graph (0, 0)][DBLP]
    CHI Extended Abstracts, 2005, pp:1685-1688 [Conf]
  10. Yoshinori Kuno, Kazuhisa Sadazuka, Michie Kawashima, Keiichi Yamazaki, Akiko Yamazaki, Hideaki Kuzuoka
    Museum guide robot based on sociological interaction analysis. [Citation Graph (0, 0)][DBLP]
    CHI, 2007, pp:1191-1194 [Conf]
  11. Rahmadi Kurnia, M. Altab Hossain, Yoshinori Kuno
    Use of Spatial Reference Systems in Interactive Object Recognition. [Citation Graph (0, 0)][DBLP]
    CRV, 2006, pp:62- [Conf]
  12. Kang-Hyun Jo, Yoshinori Kuno, Yoshiaki Shirai
    Manipulative Hand Gesture Recognition Using Task Knowledge for Human Computer Interaction. [Citation Graph (0, 0)][DBLP]
    FG, 1998, pp:468-473 [Conf]
  13. Nobutaka Shimada, Yoshiaki Shirai, Yoshinori Kuno, Jun Miura
    Hand Gesture Estimation and Model Refinement Using Monocular Camera - Ambiguity Limitation by Inequality Constraints. [Citation Graph (0, 0)][DBLP]
    FG, 1998, pp:268-273 [Conf]
  14. Akihiko Iketani, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai
    Image Analysis for Video Surveillance Based on Spatial Regularization of a Statistical Model-Based Change Detection. [Citation Graph (0, 0)][DBLP]
    ICIAP, 1999, pp:1108-1111 [Conf]
  15. Akihiko Iketani, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai
    Real-Time Surveillance System Detecting Persons in Complex Scenes. [Citation Graph (0, 0)][DBLP]
    ICIAP, 1999, pp:1112-1115 [Conf]
  16. Terence Chek Hion Heng, Yoshinori Kuno, Yoshiaki Shirai
    Combination of Active Sensing and Sensor Fusion for Collision Avoidance in Mobile Robots. [Citation Graph (0, 0)][DBLP]
    ICIAP (2), 1997, pp:568-575 [Conf]
  17. Mun Ho Jeong, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai
    Recognition of Shape-Changing Hand Gestures Based on Switching Linear Model. [Citation Graph (0, 0)][DBLP]
    ICIAP, 2001, pp:14-19 [Conf]
  18. Yoshinori Kuno, Satoru Nakanishi, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai
    Robotic Wheelchair Observing Its Inside and Outside. [Citation Graph (0, 0)][DBLP]
    ICIAP, 1999, pp:502-507 [Conf]
  19. Yoshinori Kuno, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai
    Interactive Gesture Interface for Intelligent Wheelchairs. [Citation Graph (0, 0)][DBLP]
    IEEE International Conference on Multimedia and Expo (II), 2000, pp:789-792 [Conf]
  20. M. Altab Hossain, Rahmadi Kurnia, Akio Nakamura, Yoshinori Kuno
    Interactive vision to detect target objects for helper robots. [Citation Graph (0, 0)][DBLP]
    ICMI, 2005, pp:293-300 [Conf]
  21. Shengshien Chong, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai
    Human-Robot Interface Based on Speech Understanding Assisted by Vision. [Citation Graph (0, 0)][DBLP]
    ICMI, 2000, pp:16-23 [Conf]
  22. Yoshinori Kuno, Arihiro Sakurai, Dai Miyauchi, Akio Nakamura
    Two-way eye contact between humans and robots. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:1-8 [Conf]
  23. Zaliyana Mohd Hanafiah, Chizu Yamazaki, Akio Nakamura, Yoshinori Kuno
    Understanding Inexplicit Utterances Using Vision for Helper Robots. [Citation Graph (0, 0)][DBLP]
    ICPR (4), 2004, pp:925-928 [Conf]
  24. Mun Ho Jeong, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai
    Two-Hand Gesture Recognition using Coupled Switching Linear Model. [Citation Graph (0, 0)][DBLP]
    ICPR (1), 2002, pp:9-12 [Conf]
  25. Mun Ho Jeong, Yoshinori Kuno, Nobutaka Shimada, Yoshiaki Shirai
    Two-Hand Gesture Recognition Using Coupled Switching Linear Model. [Citation Graph (0, 0)][DBLP]
    ICPR (3), 2002, pp:529-532 [Conf]
  26. Yoshinori Kuno, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai
    Intelligent Wheelchair Remotely Controlled by Interactive Gestures. [Citation Graph (0, 0)][DBLP]
    ICPR, 2000, pp:4672-4675 [Conf]
  27. Dai Miyauchi, Arihiro Sakurai, Akio Nakamura, Yoshinori Kuno
    Human-Robot Eye Contact through Observations and Actions. [Citation Graph (0, 0)][DBLP]
    ICPR (4), 2004, pp:392-395 [Conf]
  28. Nobutaka Shimada, Kousuke Kimura, Yoshiaki Shirai, Yoshinori Kuno
    Hand Posture Estimation by Combining 2-D Appearance-Based and 3-D Model-Based Approaches. [Citation Graph (0, 0)][DBLP]
    ICPR, 2000, pp:3709-3712 [Conf]
  29. Yoshinori Kuno, Satoru Nakanishi, Teruhisa Murashima, Nobutaka Shimada, Yoshiaki Shirai
    Robotic Wheelchair with Three Control Modes. [Citation Graph (0, 0)][DBLP]
    ICRA, 1999, pp:2590-2595 [Conf]
  30. Yoshinori Kuno, Mitsutoshi Yoshizaki, Akio Nakamura
    Vision-Speech System Becoming Efficient and Friendly through Experience. [Citation Graph (0, 0)][DBLP]
    INTERACT, 2003, pp:- [Conf]
  31. M. Altab Hossain, Rahmadi Kurnia, Yoshinori Kuno
    Geometric and Photometric Analysis for Interactively Recognizing Multicolor or Partially Occluded Objects. [Citation Graph (0, 0)][DBLP]
    ISVC, 2005, pp:134-142 [Conf]
  32. Al Mansur, M. Altab Hossain, Yoshinori Kuno
    Integration of Multiple Methods for Class and Specific Object Recognition. [Citation Graph (0, 0)][DBLP]
    ISVC (1), 2006, pp:841-849 [Conf]
  33. Rahmadi Kurnia, M. Altab Hossain, Akio Nakamura, Yoshinori Kuno
    Generation of efficient and user-friendly queries for helper robots to detect target objects. [Citation Graph (0, 0)][DBLP]
    Advanced Robotics, 2006, v:20, n:5, pp:499-517 [Journal]
  34. Yoshinori Kuno, Yasukazu Okamoto, Satoshi Okada
    Robot Vision Using a Feature Search Strategy Generated from a 3D Object Model. [Citation Graph (0, 0)][DBLP]
    IEEE Trans. Pattern Anal. Mach. Intell., 1991, v:13, n:10, pp:1085-1097 [Journal]
  35. Akihiko Iketani, Atsushi Nagai, Yoshinori Kuno, Yoshiaki Shirai
    Real-Time Surveillance System Detecting Persons in Complex Scenes. [Citation Graph (0, 0)][DBLP]
    Real-Time Imaging, 2001, v:7, n:5, pp:433-446 [Journal]
  36. Sakashi Maeda, Yoshinori Kuno, Yoshiaki Shirai
    Mobile robot localization based on eigenspace analysis. [Citation Graph (0, 0)][DBLP]
    Systems and Computers in Japan, 1997, v:28, n:12, pp:11-21 [Journal]
  37. Atsushi Nagai, Yoshinori Kuno, Yoshiaki Shirai
    Detection of moving objects against a changing background. [Citation Graph (0, 0)][DBLP]
    Systems and Computers in Japan, 1999, v:30, n:11, pp:107-116 [Journal]
  38. Ryuzo Okada, Yoshiaki Shirai, Jun Miura, Yoshinori Kuno
    Tracking a person with 3D motion by integrating optical flow and depth. [Citation Graph (0, 0)][DBLP]
    Systems and Computers in Japan, 2001, v:32, n:7, pp:29-38 [Journal]
  39. Al Mansur, Katsutoshi Sakata, Yoshinori Kuno
    Recognition of Household Objects by Service Robots Through Interactive and Autonomous Methods. [Citation Graph (0, 0)][DBLP]
    ISVC (2), 2007, pp:140-151 [Conf]

  40. Precision timing in human-robot interaction: coordination of head movement and utterance. [DBLP]
  41. Revealing Gauguin: engaging visitors in robot guide's explanation in an art museum. [DBLP]
  42. Assisted-care robot initiation of communication in multiparty settings. [DBLP]
  43. Selective function of speaker gaze before and during questions: towards developing museum guide robots. [DBLP]
  44. Effect of restarts and pauses on achieving a state of mutual orientation between a human and a robot. [DBLP]
  45. Prior-to-request and request behaviors within elderly day care: Implications for developing service robots for use in multiparty settings. [DBLP]
  46. Spatial Relation Model for Object Recognition in Human-Robot Interaction. [DBLP]
  47. Smart Wheelchair Navigation Based on User's Gaze on Destination. [DBLP]
  48. An Integrated Method for Multiple Object Detection and Localization. [DBLP]
  49. Improving Recognition through Object Sub-categorization. [DBLP]
  50. Efficient Hypothesis Generation through Sub-categorization for Multiple Object Detection. [DBLP]
  51. Object Detection and Localization in Clutter Range Images Using Edge Features. [DBLP]
  52. Choosing answerers by observing gaze responses for museum guide robots. [DBLP]
  53. Object Recognition Using Environmental Cues Mentioned Explicitly or Implicitly in Speech. [DBLP]
  54. Skin Patch Trajectories as Scene Dynamics Descriptors. [DBLP]
  55. Selection of Object Recognition Methods According to the Task and Object Category. [DBLP]
  56. Human-Robot Interface with Appropriate Frame Selection. [DBLP]
  57. Object Recognition Using Conic-Based Invariants from Multiple Views. [DBLP]
  58. Qualitative Visual Interpretation of 3D Hand Gestures Using Motion Parallax. [DBLP]
  59. Advanced Vision Processor with an Overall Image Processing Unit and Multiple Local Image Processing Modules. [DBLP]
  60. Museum guide robot with three communication modes. [DBLP]
  61. Robotic wheelchair based on observations of people using integrated sensors. [DBLP]
  62. Effective Head Gestures for Museum Guide Robots in Interaction with Humans. [DBLP]

System created by asidirop@csd.auth.gr [http://users.auth.gr/~asidirop/] for the Data Engineering Laboratory, Department of Informatics, Aristotle University © 2002