HRI 2015 Workshop on Enabling Rich, Expressive Robot Animation

Call for Abstracts and Video Submissions

Enabling Rich, Expressive Robot Animation (full day) Workshop

@ the 10th ACM/IEEE International Conference on Human-Robot Interaction

March 2nd, 2015

Portland, Oregon, USA

Important Dates

23 January 2015: Abstracts/Video submissions due

6 February 2015: Notification of acceptance

27 February 2015: Final version ready

2 March 2015: Full-day workshop


Workshop Summary

HRI researchers and practitioners often need to generate complex, rich, expressive movement from machines to facilitate effective interaction. Techniques have included live puppeteering via Wizard-of-Oz setups, sympathetic interfaces, and custom control software. Often, animation is accomplished by playing back pre-rendered movement sequences generated by offline animators, puppeteers, or actors providing input to motion capture systems. Roboticists have also explored real-time parametric animation, affective motion planning, mechanical motion design, and blends of offline and live methods. Generating robot animation is not always straightforward and can be time-consuming, costly, or even counter-productive when human-robot interaction breaks down due to inadequate animation. There is a need to compare the various approaches to animating robots, identify when particular techniques are most appropriate, and highlight opportunities for exploration and tool-building.


Topics include:

  • What is a useful taxonomy for describing generation of robot animation?
  • What are the assumptions, advantages, and constraints of techniques for offline (i.e. pre-rendered) or live generation of robot animation?
  • What are existing tools for animating robots, and what new tools could be built? Are there existing frameworks that could be extended to enable easier generation of animation?
  • How do the morphologies and other physical constraints of robots impact robot animation?
  • What techniques and lessons can roboticists co-opt from computer animation, animatronics, and game development for real-time animation of robots?

This workshop intends to engage multidisciplinary researchers and practitioners in dialog about new techniques and tools that enable rich, expressive animation in robots.

Agenda & Keynote Speakers


  • Ian Ingram (Artist, Andrew W. Mellon Art+Environment Artist-In-Residence, Pitzer College) Having Breath.

We are finalizing the day’s agenda and will post updates as they become available. If you are still interested in submitting a video abstract, please contact us at




Submission Instructions

We welcome submissions from a broad range of engineers, scientists, and artists working with robot animation in HRI, social robotics, entertainment, and art.

We invite video submissions in lieu of position papers. Videos should not exceed six minutes, including titles and credits. Each video must be accompanied by a brief abstract (200 words). All submitted material will undergo peer review by the program committee and external experts as appropriate. The author(s) of an accepted video will be given the opportunity to introduce the video with a short presentation at the workshop, or the video may be screened on its own. The content of the video submission should be self-contained and self-explanatory.

To submit your video, please upload it to either YouTube or Vimeo and send the link (private or password protected if appropriate) along with a brief abstract to


Organizers

  • Elizabeth Jochum (BA, Wellesley College; MA, PhD, University of Colorado) is an Assistant Professor of Art and Technology at Aalborg University (Denmark) and co-founder of the Robot Culture and Aesthetics (ROCA) research group at the University of Copenhagen. Her research focuses on the intersection of robotics, art, and performance.
  • David Nuñez is a Research Specialist in the Opera of the Future group at the MIT Media Lab, working on new techniques for expressive motion generation and designing robot systems for mobilized objects and environments in performance and experiential art. Previously, he was in the Personal Robots Group at the Media Lab (est. MAS completion Feb 2015), and he received a B.A. in Computer Science from Rice University.