How choreography can help robots come alive

Consider this scene from the 2014 film Ex Machina: Caleb, a young nerd, is in a dimly lit room with a scantily clad femmebot, Kyoko. Nathan, a brilliant roboticist, drunkenly commands Caleb to dance with the Kyoko-bot. To get things going, Nathan presses a wall panel and the room lighting suddenly shifts to an ominous red while Oliver Cheatham's disco classic "Get Down Saturday Night" starts playing. Kyoko, who has apparently done this before, wordlessly begins to dance, and Nathan joins his robot creation in an intricately choreographed bit of pelvic thrusting. The scene suggests that Nathan imbued his creation with disco functionality, but how did he choreograph the dance on Kyoko, and why?

Ex Machina may not answer these questions, but the scene does point to an evolving field of robotics research: choreography. Broadly, choreography is the making of decisions about how bodies move through space and time. In the dance sense, choreography is about articulating movement patterns for a given context, usually optimized for expressiveness rather than utility. To be attuned to the choreography of the world is to notice how people move and interact within complex, technology-laden environments. Choreo-roboticists (i.e., roboticists who work choreographically) believe that incorporating dancerly gestures into machine behavior will make robots seem less like industrial constructions and more lively, empathetic, and attentive. Such interdisciplinary interventions could make robots easier to work with, no small feat given their increasing presence in consumer, medical, and military contexts.

Although concerns about the movement of bodies are central to both dance and robotics, the disciplines have rarely overlapped. On the one hand, the Western dance tradition maintains a generally anti-intellectual streak that presents real challenges for those interested in interdisciplinary research. George Balanchine, the famed founder of the New York City Ballet, famously told his dancers, "Don't think, dear; do." Thanks in part to this culture, the stereotype of dancers as serviceable bodies better seen than heard calcified long ago. Meanwhile, the field of computer science, and robotics by extension, has similar, if different, body issues. As sociologists Simone Browne, Ruha Benjamin, and others have shown, there is a long history of emerging technologies casting human bodies as mere objects of surveillance and speculation. The result has been the perpetuation of racist, pseudoscientific practices such as phrenology, mind-reading software, and AIs that claim to tell whether you are gay from your face. The body is a problem for computer scientists, and the field's overwhelming response has been technical "solutions" that try to read bodies without meaningful input from their owners. That is, an insistence that bodies be seen but not heard.

Despite this historical divide, it is perhaps not too much of a stretch to consider roboticists as choreographers of a specialized kind, and to think that integrating choreography and robotics could benefit both fields. The movement of robots is not usually studied for meaning and intentionality the way dancers' movement is, but roboticists and choreographers are engaged in the same fundamental concerns: phrasing, extension, force, shape, and effort. "Roboticists and choreographers aim to do the same thing: to understand and convey subtle choices in motion within a given context," writes Amy LaViers, a certified movement analyst and founder of the Robotics, Automation, and Dance (RAD) Lab, in a recent National Science Foundation-funded paper. When roboticists work choreographically to determine robotic behavior, they make decisions about how human and nonhuman bodies move expressively in intimate context with one another. This differs from the utilitarian parameters that govern most robotics research, where optimization is paramount (does the robot accomplish its task?) and what a device's movement means, or how it makes someone feel, is of no clear consequence.

Madeline Gannon, founder of the research studio AtonAton, leads the field in her exploration of robotic expressiveness. Her World Economic Forum-commissioned installation Manus illustrates the possibilities of choreo-robotics, both in its brilliant choreographic considerations and in its feats of innovative mechanical engineering. The piece consists of ten robotic arms displayed behind a transparent panel, each poised and dramatically lit. The arms are reminiscent of the production design of technodystopic films such as Ghost in the Shell. Robotic arms like these are designed to perform repetitive labor and are commonly used for tasks such as painting car chassis. Yet when Manus is activated, the arms exhibit none of the expected, repetitive rhythms of the assembly line; instead they appear alive, each moving independently to communicate animatedly with its environment. Depth sensors installed at the base of the robots' platform track the movement of human observers through space, measure their distance, and respond iteratively. This tracking data is distributed across the robot system and functions as a shared vision for all the robots. If a passerby moves close enough to any one arm, it will take a closer "look" by tilting its "head" toward the stimulus and then moving in to engage. Such simple, subtle gestures have been used by puppeteers for millennia to endow objects with animus. Here they have the cumulative effect of making Manus seem curious and very much alive. These small choreographies give the appearance of personality and intelligence; this is the functional difference between a random series of industrial robots and the coordinated movements of intelligent pack behavior.
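The behavior described above amounts to a simple sense-share-respond loop: depth sensors publish the positions of nearby people into a state shared by all ten arms, and each arm turns toward and leans in to anyone who comes close enough. Manus's actual software is not public, so the following is only a rough, hypothetical sketch of that loop; the class names, threshold, and gesture methods are all assumptions for illustration.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of a Manus-like sense-share-respond loop.
# None of these names come from the real installation.

@dataclass
class Person:
    x: float  # meters, in a shared coordinate frame built from depth-sensor tracking
    y: float

@dataclass
class Arm:
    base_x: float
    base_y: float

    def distance_to(self, p: Person) -> float:
        return math.hypot(p.x - self.base_x, p.y - self.base_y)

    def look_toward(self, p: Person) -> None:
        # Tilt the "head" (end effector) toward the person -- placeholder action.
        angle = math.degrees(math.atan2(p.y - self.base_y, p.x - self.base_x))
        print(f"arm at ({self.base_x}, {self.base_y}): tilt toward {angle:.0f} deg")

    def lean_in(self) -> None:
        # Move slightly closer to engage -- placeholder action.
        print(f"arm at ({self.base_x}, {self.base_y}): lean in")

    def idle(self) -> None:
        # Gentle ambient motion when nobody is near -- placeholder action.
        pass

ENGAGE_RADIUS = 1.5  # meters; assumed threshold for "close enough"

def update(arms: list[Arm], tracked_people: list[Person]) -> None:
    """One tick: tracking data is shared by every arm, and each reacts locally."""
    for arm in arms:
        nearby = [p for p in tracked_people if arm.distance_to(p) < ENGAGE_RADIUS]
        if nearby:
            closest = min(nearby, key=arm.distance_to)
            arm.look_toward(closest)
            arm.lean_in()
        else:
            arm.idle()

# Example tick: ten arms in a row, one visitor standing near the third arm.
arms = [Arm(base_x=float(i), base_y=0.0) for i in range(10)]
update(arms, [Person(x=2.2, y=0.8)])
```

In the installation itself, the "respond" step would drive joint trajectories on industrial arms rather than print statements, but the structure, shared tracking state plus small attentive per-arm gestures, is the choreographic core the paragraph describes.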
