Vipul Nair

Hello

I’m a Ph.D. researcher in Informatics, bridging cognitive science, AI, and human-centered technology to study how people process sensory information in real-world interactions. With 8+ years of experience designing eye-tracking, motion-capture, and mixed-methods studies, I advance naturalistic behavior research through publications, datasets, and global collaborations, turning complex insights into intuitive solutions.

See my experience →

A Naturalistic Multimodal Human Interaction Dataset: Annotated Visuo-Auditory Cue and Attention Data

This naturalistic multimodal dataset captures 27 everyday interaction scenarios with detailed visuo-spatial and auditory cue annotations plus eye-tracking data, offering a rich resource for research in human interaction, attention modeling, and cognitive media studies.

May 1, 2025

Modeling Visual Attention in Naturalistic Human Interactions

We explore how cues such as gaze, speech, and motion guide visual attention when observing everyday interactions, revealing cross-modal patterns through eye-tracking data and structured event analysis.

January 30, 2025

The Observer Lens: Characterizing Visuospatial Features in Multimodal Interactions

This doctoral thesis examines how both low-level (e.g., kinematics) and high-level (e.g., gaze, speech) visuospatial features shape perception and attention in human interactions, combining cognitive science and computational modeling to inform human-centric technologies.

May 24, 2024

Kinematic Primitives in Action Similarity Judgments: A Human-Centered Computational Model

This study compares human and computational judgments of action similarity, revealing that both rely heavily on kinematic features such as velocity and spatial cues, while action semantics play only a minor role in human judgments.

January 30, 2023

Attentional synchrony in films: A window to visuospatial characterization of events

This study analyzes how visuospatial features in film scenes shape attentional synchrony, using eye-tracking data to reveal how scene complexity guides visual attention during event perception.

September 22, 2022

Anticipatory Instances in Films: What Do They Tell Us About Event Understanding?

This study explores how visuospatial features in film scenes—like occlusion or movement—relate to anticipatory gaze and event segmentation, using eye-tracking data and multimodal analysis to uncover patterns in human event understanding.

June 1, 2022

Event Segmentation Through the Lens of Multimodal Interaction

This research develops a conceptual cognitive model to examine how multimodal cues—like gaze, motion, and speech—influence event segmentation and prediction in narrative media, using detailed scene analysis and eye-tracking data from naturalistic movie viewing.

September 17, 2021

Action similarity judgment based on kinematic primitives

This study compares human judgments and a kinematics-based computational model in recognizing action similarity, revealing that both rely heavily on low-level kinematic features, with the model showing higher sensitivity but also greater bias.

October 30, 2020

The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy

This dataset captures 300+ hours of therapy sessions with 61 children with ASD, offering rich 3D behavioral data—including motion, gaze, and head orientation—from both robot-assisted and therapist-led interventions.

August 21, 2020

Incidental processing of biological motion: Effects of orientation, local-motion and global-form features

This study shows that incidentally processed biological figures influence visual perception according to their structure: upright point-light walkers (PLWs) carrying global-form features affect processing more strongly than inverted or scrambled PLWs that retain only local-motion cues.

August 30, 2018

Trust-building of Patients’ Relatives through an Android App-based Patient Information Tool

This research proposes a cognitive science–informed Android app that improves transparency in Indian hospitals by providing patient relatives with real-time updates on health status, treatments, and expenses, aiming to reduce mistrust through clear, accessible communication.

June 1, 2015