A human who voices AI outputs verbatim, lending the machine a physical presence.
An echoborg is a human confederate who serves as the live embodiment of an AI system, speaking aloud, word for word, text relayed to a hidden earpiece during face-to-face interactions. The setup decouples computational authorship from physical presence: the AI produces the words, but a human body delivers them, complete with natural prosody, facial expressions, and nonverbal cues. The result is a hybrid entity in which machine agency is masked behind human form, raising fundamental questions about where intelligence and social intent actually reside.
The mechanism works by relaying an AI's output (typically from a conversational language model) as audio to an earpiece worn by the confederate, who shadows the words, speaking them as naturally as possible to an unsuspecting interlocutor. The human intermediary contributes embodiment but not authorship, functioning as a biological loudspeaker. Researchers can manipulate variables such as response latency, disclosure timing, and conversational domain to study how each factor shapes the interlocutor's experience. Crucially, the echoborg inverts the classic Wizard-of-Oz paradigm of HCI research: rather than a human secretly simulating a machine, a machine secretly drives a human.
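The relay loop behind this setup can be sketched in a few lines. Everything below is a hypothetical illustration, not an interface from the echoborg literature: the `relay_turn` helper, the stub model, and the `added_latency_s` parameter are assumed names, and a real deployment would feed the returned string to a text-to-speech engine wired to the confederate's earpiece.

```python
import time
from typing import Callable


def relay_turn(
    generate_reply: Callable[[str], str],
    interlocutor_utterance: str,
    added_latency_s: float = 0.0,
) -> str:
    """Produce the next line for the confederate to shadow.

    `generate_reply` stands in for any conversational model. The optional
    delay models the experimenter-controlled response latency mentioned
    above; in practice this string would be synthesized to audio and
    played into the confederate's earpiece.
    """
    reply = generate_reply(interlocutor_utterance)
    if added_latency_s > 0:
        time.sleep(added_latency_s)  # manipulated variable: response delay
    return reply


# Stub model for illustration only; a study would call a real chatbot here.
def stub_model(utterance: str) -> str:
    return f"Echo: {utterance}"


line = relay_turn(stub_model, "Where are you from?", added_latency_s=0.1)
```

The point of isolating the relay in one function is that experimental manipulations (latency, domain, disclosure timing) become explicit parameters rather than ad hoc interventions.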
For AI and human-computer interaction researchers, echoborg experiments offer a uniquely ecologically valid testbed. They allow controlled study of mind attribution, social bias, and perceived competence in conditions that screen-based chat cannot replicate. When people believe they are speaking with another human, they apply richer social cognition—and discovering the machine's role afterward can dramatically shift those judgments. This makes echoborgs valuable for probing Turing Test variants in naturalistic settings and for examining how embodiment inflates or distorts perceived AI capability.
Beyond experimental utility, the echoborg concept carries significant ethical weight. It raises questions about informed consent, deception in research, and the broader societal implications of AI systems that present themselves through human proxies. As conversational AI becomes more capable and pervasive, understanding how embodiment shapes trust and attribution is increasingly important for designing transparent, accountable human-AI interaction. The echoborg framework provides a structured lens through which these concerns can be empirically investigated.