AI field focused on systems that recognize, interpret, and simulate human emotions.
Affective computing is a subfield of artificial intelligence and human-computer interaction concerned with building systems that can detect, interpret, and respond to human emotional states. The field rests on the premise that emotion is not peripheral to intelligence but central to it — that machines capable of recognizing and appropriately responding to affect will be fundamentally more useful and natural to interact with. Rosalind Picard's 1997 book Affective Computing formalized the discipline and remains its foundational text, establishing the theoretical and engineering agenda that researchers have pursued ever since.
The technical machinery of affective computing draws on a wide range of sensing modalities and machine learning methods. Systems based on the Facial Action Coding System (FACS) analyze muscle movements captured by cameras to infer emotional states; speech processing models extract prosodic features like pitch, tempo, and energy to detect frustration, joy, or distress; physiological sensors measure galvanic skin response, heart rate variability, and EEG signals as correlates of arousal and valence; and natural language processing pipelines perform sentiment and emotion analysis on text. Modern deep learning architectures, particularly convolutional networks for visual input and transformers for language, have substantially improved the accuracy and robustness of these recognition systems, enabling real-time emotion inference across noisy, real-world conditions.
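To make the prosodic features mentioned above concrete, here is a minimal sketch of extracting energy, zero-crossing rate, and pitch from an audio frame. It uses only the Python standard library and a synthetic sine tone as a stand-in for recorded speech; the function names and thresholds are illustrative assumptions, not any particular toolkit's API, and a real pipeline would operate on windowed frames of actual audio.

```python
import math

def synth_tone(freq_hz, dur_s=0.5, sr=8000, amp=0.8):
    """Generate a synthetic sine wave as a stand-in for a voiced speech frame."""
    n = int(dur_s * sr)
    return [amp * math.sin(2 * math.pi * freq_hz * t / sr) for t in range(n)]

def rms_energy(frame):
    """Root-mean-square energy: a rough correlate of vocal intensity/arousal."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs where the signal changes sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def pitch_autocorr(frame, sr, fmin=80, fmax=500):
    """Estimate fundamental frequency by peak-picking the autocorrelation
    over lags corresponding to the plausible human pitch range."""
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag, best_val = lo, float("-inf")
    for lag in range(lo, hi + 1):
        val = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return sr / best_lag

signal = synth_tone(220)  # a 220 Hz "voice"
print(rms_energy(signal))            # intensity correlate
print(zero_crossing_rate(signal))    # spectral roughness correlate
print(pitch_autocorr(signal, 8000))  # recovers roughly 220 Hz
```

In a real affect-recognition system these frame-level features would be computed over short sliding windows, aggregated into statistics (means, ranges, contours), and fed to a classifier or regressor over arousal and valence; production systems typically use dedicated audio libraries rather than hand-rolled loops like these.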
The practical stakes of affective computing are significant and span multiple domains. In mental health, affect-aware systems can monitor patients for signs of depression or anxiety between clinical visits. In education, adaptive learning platforms can detect student frustration or disengagement and adjust pacing or content accordingly. In automotive safety, driver monitoring systems use affective signals to detect drowsiness or distraction. Customer service and social robotics applications use emotional feedback to make interactions feel more responsive and personalized. At the same time, the field raises serious ethical questions around consent, surveillance, and the reliability of emotion inference across cultures and individuals — concerns that have become increasingly prominent as these systems move from research labs into commercial deployment.