HRI: Stimuli Development
Overview
Multimodal Joint Attention in Robot-Guided Experiences
📅 July 2022
🏛️ University of Skövde, Sweden; Örebro University, Sweden
👥 Collaborators: Vasiliki Kondyli
This project has developed a structured set of human-robot interaction stimuli for investigating joint attention mechanisms. We created and recorded naturalistic museum guidance scenarios featuring the Pepper robot, focusing on the integration of multimodal cues (gaze, gesture, and spatial behavior). These stimuli will enable future eye-tracking and event segmentation studies on how humans perceive and respond to robot-initiated joint attention.
Research Focus Areas
- Robot gaze patterns during attention initiation
- Complementary cue combinations for exhibit referencing
- Contingent response effects on engagement
- Spatial reorientations as perceptual boundaries
Methods
- Stimuli Development: Recording of scripted human-robot interaction scenarios with defined multimodal behaviors
- Future Experiments: Eye-tracking and event segmentation tasks during stimulus viewing
Recording Setup
- Stimuli Environment: Laboratory configured as an exhibition space with three distinct exhibits and integrated obstacles to create natural interaction challenges
- Robot Configuration: The Pepper robot was programmed with specific behaviors (a minimal scripting sketch follows this list):
- Referential pointing gestures
- Head-turning for joint attention initiation
- Contingent responses to human input
- Spatial reorientation between exhibits
- Camera Setup: Four stationary camera angles plus one mobile perspective, with synchronized audio
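To illustrate how such behaviors are typically scripted, here is a minimal sketch (not the project's actual code) using Pepper's standard NAOqi Python SDK (`qi`, `ALMotion`, `ALTextToSpeech`). The robot address, joint targets, timings, and spoken prompt are illustrative assumptions.

```python
# Hedged sketch: a head-turn joint attention bid, a referential point, and a
# contingent nod on Pepper via the NAOqi Python SDK. ALMotion, ALTextToSpeech,
# and the joint names are standard NAOqi; the IP address, angles, timings, and
# utterance are illustrative assumptions, not the project's recorded script.
import qi

PEPPER_IP = "192.168.1.10"  # placeholder robot address
session = qi.Session()
session.connect("tcp://{}:9559".format(PEPPER_IP))

motion = session.service("ALMotion")
tts = session.service("ALTextToSpeech")

def joint_attention_bid(yaw_rad=0.6):
    """Turn the head toward an exhibit, then prompt the visitor."""
    motion.angleInterpolation(["HeadYaw"], [yaw_rad], [1.0], True)
    tts.say("Have you seen this exhibit before?")

def referential_point():
    """Extend the right arm toward the exhibit (referential pointing)."""
    motion.setAngles(["RShoulderPitch", "RShoulderRoll", "RElbowRoll"],
                     [0.2, -0.3, 0.4], 0.15)

def contingent_nod():
    """Nod (head pitch down, then back up) in response to visitor input."""
    motion.angleInterpolation(["HeadPitch"], [[0.25, 0.0]], [[0.4, 0.8]], True)
```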
Stimuli
- Selected Human-Robot Interaction Phases:
- Initial exhibit contemplation (static attention)
- Thinking behavior with verbal contingency
- Spatial reorientation to new exhibit
- Joint attention bid with question prompt
- Obstacle navigation with intention signaling
- Closing interaction
- Multimodal Behaviors Captured:
- Pointing gestures (circular/referential)
- Head-turning for joint attention
- Contingent nodding responses
- Spatial reorientations (body rotations)
- Verbal expressions of confusion/preference
- Output:
- 10+ minutes of annotated HRI footage
- Behavior-coded interaction sequences (an example coding record is sketched below)
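As a concrete but hypothetical illustration of what a behavior-coded sequence could look like, the following Python sketch defines one possible annotation record and writes two example rows to CSV. The field names, labels, and values are assumptions for illustration, not the project's actual coding scheme.

```python
# Sketch of one possible coding scheme for behavior-coded interaction
# sequences: each record ties a multimodal behavior to a time span in the
# recorded footage. All field names and example values are illustrative.
from dataclasses import dataclass, asdict
import csv

@dataclass
class BehaviorAnnotation:
    clip_id: str      # recorded scenario / camera angle
    onset_s: float    # behavior onset, seconds from clip start
    offset_s: float   # behavior offset
    phase: str        # interaction phase, e.g. "joint_attention_bid"
    modality: str     # "gaze", "gesture", "speech", or "body"
    behavior: str     # coded label, e.g. "head_turn", "referential_point"

annotations = [
    BehaviorAnnotation("scenario01_cam2", 12.4, 14.1,
                       "joint_attention_bid", "gaze", "head_turn"),
    BehaviorAnnotation("scenario01_cam2", 13.0, 15.6,
                       "joint_attention_bid", "gesture", "referential_point"),
]

with open("behavior_codes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(annotations[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(a) for a in annotations)
```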
Applications
- Robotic museum guidance systems: Designing engaging guide behaviors
- Robot-enhanced autism therapy: Social skills and joint attention intervention protocols
- Social HRI benchmarks: Standardized interaction metrics
Future Directions
- Pilot eye-tracking study using developed stimuli
- Quantitative analysis of attention hotspots during joint attention bids
- Event boundary mapping relative to robot behaviors (see the analysis sketch below)
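A hedged sketch of how these planned analyses might look: given fixation samples, joint attention bid onsets, and robot reorientation onsets extracted from the stimuli, the first function estimates how often gaze falls in the referenced exhibit's area of interest (AOI) shortly after a bid, and the second measures how close participant-marked event boundaries fall to the robot's reorientations. All names, window sizes, and AOI coordinates are illustrative assumptions.

```python
# Illustrative analysis sketch, not the project's analysis pipeline.
from typing import List, Tuple

Fixation = Tuple[float, float, float]   # (time_s, x_px, y_px)

def aoi_hit_rate(fixations: List[Fixation],
                 bid_onsets: List[float],
                 aoi: Tuple[float, float, float, float],
                 window_s: float = 3.0) -> float:
    """Fraction of fixations within `window_s` after any bid that land in the AOI."""
    x0, y0, x1, y1 = aoi
    in_window = [(t, x, y) for (t, x, y) in fixations
                 if any(0.0 <= t - b <= window_s for b in bid_onsets)]
    if not in_window:
        return 0.0
    hits = sum(1 for (_, x, y) in in_window if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(in_window)

def boundary_offsets(boundaries: List[float],
                     reorientation_onsets: List[float]) -> List[float]:
    """Signed distance (s) from each marked event boundary to the nearest robot reorientation."""
    return [min((b - r for r in reorientation_onsets), key=abs)
            for b in boundaries]
```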
Expected Outcomes
- Validated stimulus set for HRI joint attention research
- Annotation framework for HRI multimodal behaviors
- Foundation for investigating joint attention during joint action bids in HRI
Collaboration Opportunities
Open to collaboration or discussion on methodology, data, or future directions. Happy to exchange ideas and explore new perspectives.