We study how humans and AI systems can work together effectively in small-group settings, focusing on shared understanding, belief tracking, and collaborative problem-solving.
We explore how to detect, interpret, and model human affective and cognitive states using multimodal data, including facial expressions, physiological signals, and behavioral cues.
We investigate the use of computer vision in extended reality (XR) contexts to enhance interactivity, perception, and user modeling in virtual and augmented environments.
We apply computer vision technologies in industrial settings, focusing on real-time detection, monitoring, and analysis to improve efficiency, safety, and decision-making.