Motor Learning and Control

We believe that embodiment is an inseparable part of intelligence: it determines how an agent interacts with the physical world and its ability to effect meaningful change involving real atoms, not just virtual bits. Furthermore, we believe that the computational and embodied aspects of artificial intelligence cannot be studied in isolation, as they inform and enable each other. We are particularly interested in the intersection of machine learning, reinforcement learning, optimal control, and mechanism design for achieving complex motor skills involving contact with the physical world.

Selected publications and resources

  • T. Chen. "On the Interplay between Mechanical and Computational Intelligence in Robot Hands", Columbia University Doctoral Dissertation, 2021 [pdf]
  • T. Chen*, Z. He* and M. Ciocarlie. "Hardware as Policy: Mechanical and Computational Co-Optimization using Deep Reinforcement Learning", Conference on Robot Learning, 2020 (*joint first authors) [arXiv, paper webpage, 5-minute CoRL presentation video]
  • E. Hannigan, B. Song, G. Khandate, M. Haas-Heger, J. Yin and M. Ciocarlie. "Automatic Snake Gait Generation Using Model Predictive Control", IEEE Intl. Conference on Robotics and Automation, 2020 [arXiv, video]
  • T. Chen and M. Ciocarlie. "Grasping Unknown Objects with Proprioception Using a Series-Elastic-Actuated Gripper", IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018 (in press) [arXiv, video]