Robots as Minions, Sidekicks, and Apprentices:
Using Wearable Muscle, Brain, and Motion Sensors for Plug-and-Play Human-Robot Interaction


This thesis aims to enhance and simplify human-robot interaction in a variety of settings by using wearable devices and plug-and-play algorithms. Allowing more people to explore novel uses for robots could take a step towards ubiquitous robot assistants that have captured imaginations for decades.

Thesis Defense Presentation

Thesis

  • J. DelPreto, “Robots as Minions, Sidekicks, and Apprentices: Using Wearable Muscle, Brain, and Motion Sensors for Plug-and-Play Human-Robot Interaction,” PhD Thesis, Massachusetts Institute of Technology (MIT), 2021.
    [BibTeX] [Abstract] [Download PDF]

    @phdthesis{delpreto2021thesisWearablesHRI,
    author={DelPreto, Joseph},
    title={Robots as Minions, Sidekicks, and Apprentices: Using Wearable Muscle, Brain, and Motion Sensors for Plug-and-Play Human-Robot Interaction},
    year={2021},
    month={September},
    school={Massachusetts Institute of Technology (MIT)},
address={Cambridge, MA, USA},
    url={https://people.csail.mit.edu/delpreto/thesis/delpreto_PhD-thesis_2021_wearables_human-robot-interaction.pdf},
    keywords={Human-Robot Interaction; Robotics; Physical Human-Robot Interaction; Human-Robot Collaboration; Wearable; Wearable Sensors; Wearable Devices; Supervision; Apprenticeship; Remote Control; Teleoperation; EMG; Muscle Activity; Gesture Detection; Learning from Demonstration; IMU; Motion Sensors; Inertial Measurement Unit; Motion Estimation; EEG; Brain Activity; Error-Related Potentials; ErrPs; Human-Centered Systems; Human in the Loop; Machine Learning; Neural Networks; Artificial Intelligence; Plug-and-Play; Data Augmentation; Clustering; Team Lifting; Load Sharing; User Studies; Team Fluency; Virtual Reality; VR; Computer Science; Robots},
    abstract={This thesis presents algorithms and systems that use unobtrusive wearable sensors for muscle, brain, and motion activity to enable more plug-and-play human-robot interactions. Detecting discrete commands and continuous motions creates a communication vocabulary for remote control or collaboration, and learning frameworks allow robots to generalize from these interactions. Each of these building blocks focuses on lowering the barrier to casual users benefiting from robots by reducing the amount of training data, calibration data, and sensing hardware needed. This thesis thus takes a step towards more ubiquitous robot assistants that could extend humans' capabilities and improve quality of life.
    Classification and motion estimation algorithms create a plug-and-play vocabulary for robot control and teaching. Supervised learning pipelines detect directional gestures from muscle signals via electromyography (EMG), and unsupervised learning pipelines expand the vocabulary without requiring data collection. Classifiers also detect error judgments in brain signals via electroencephalography (EEG). Continuous motions are detected in two ways. Arm or walking trajectories are estimated from an inertial measurement unit (IMU) by leveraging in-task EMG-based gestures that demarcate stationary waypoints; the paths are then refined in an apprenticeship phase using gestures. Hand heights during lifting tasks are also estimated using EMG.
    Two frameworks for learning by demonstration build on these foundations. A generalization algorithm uses a single example trajectory and a constraint library to synthesize trajectories with similar behaviors in new task configurations. Alternatively, for tasks where the robot can autonomously explore behaviors, an apprenticeship framework augments self-supervision with intermittent demonstrations.
    Systems use and evaluate these algorithms with three interaction paradigms. Subjects supervise and teleoperate robot minions that perform object selection or navigation in mock safety-critical or inaccessible settings. Robot sidekicks collaborate with users to jointly lift objects and perform assemblies. Finally, robot apprentices generalize cable-routing trajectories or grasping orientations from few human demonstrations. Experiments with each system evaluate classification or motion estimation performance and user interface efficacy.
    This thesis thus aims to enhance and simplify human-robot interaction in a variety of settings. Allowing more people to explore novel uses for robots could take a step towards ubiquitous robot assistants that have captured imaginations for decades.
    }
    }

View or download the thesis PDF here

This thesis presents algorithms and systems that use unobtrusive wearable sensors for muscle, brain, and motion activity to enable more plug-and-play human-robot interactions. Detecting discrete commands and continuous motions creates a communication vocabulary for remote control or collaboration, and learning frameworks allow robots to generalize from these interactions. Each of these building blocks focuses on lowering the barrier to casual users benefiting from robots by reducing the amount of training data, calibration data, and sensing hardware needed. This thesis thus takes a step towards more ubiquitous robot assistants that could extend humans’ capabilities and improve quality of life.

Classification and motion estimation algorithms create a plug-and-play vocabulary for robot control and teaching. Supervised learning pipelines detect directional gestures from muscle signals via electromyography (EMG), and unsupervised learning pipelines expand the vocabulary without requiring data collection. Classifiers also detect error judgments in brain signals via electroencephalography (EEG). Continuous motions are detected in two ways. Arm or walking trajectories are estimated from an inertial measurement unit (IMU) by leveraging in-task EMG-based gestures that demarcate stationary waypoints; the paths are then refined in an apprenticeship phase using gestures. Hand heights during lifting tasks are also estimated using EMG.
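
As a rough illustration of the kind of gesture-detection pipeline described above (a minimal sketch, not the thesis's actual implementation), the following Python example bandpass-filters raw EMG, rectifies and smooths it into an amplitude envelope, and classifies fixed-length windows with a small neural network. The sampling rate, filter band, window length, feature set, gesture labels, and classifier architecture are all assumptions chosen for illustration.

    # Minimal sketch of an EMG gesture-classification pipeline (illustrative only).
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.neural_network import MLPClassifier

    FS = 1000        # assumed EMG sampling rate [Hz]
    WINDOW_S = 0.5   # assumed window length [s]
    GESTURES = ['left', 'right', 'up', 'down', 'rest']  # hypothetical label set

    def emg_envelope(emg, fs=FS, band=(5.0, 250.0), smooth_hz=5.0):
        """Bandpass-filter each EMG channel, rectify, then lowpass into an envelope."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
        rectified = np.abs(filtfilt(b, a, emg, axis=0))
        b_lp, a_lp = butter(4, smooth_hz / (fs / 2), btype='low')
        return filtfilt(b_lp, a_lp, rectified, axis=0)

    def window_features(envelope, fs=FS, window_s=WINDOW_S):
        """Split the envelope into non-overlapping windows and summarize each one."""
        n = int(window_s * fs)
        windows = [envelope[i:i + n] for i in range(0, len(envelope) - n + 1, n)]
        return np.array([np.concatenate([w.mean(axis=0), w.std(axis=0), w.max(axis=0)])
                         for w in windows])

    def train_gesture_classifier(X_train, y_train):
        """Fit a small neural network on labeled feature windows (one gesture per window)."""
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        clf.fit(X_train, y_train)
        return clf

A classifier trained this way would label each incoming window with one of the gestures in GESTURES, which a robot-control layer could then map to discrete commands.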

Two frameworks for learning by demonstration build on these foundations. A generalization algorithm uses a single example trajectory and a constraint library to synthesize trajectories with similar behaviors in new task configurations. Alternatively, for tasks where the robot can autonomously explore behaviors, an apprenticeship framework augments self-supervision with intermittent demonstrations.
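
The constraint-library generalization in the thesis is more involved than can be shown here; as a minimal stand-in under stated assumptions, the sketch below simply warps a single demonstrated trajectory so that its endpoints match a new start and goal while preserving the demonstrated shape, which conveys the basic idea of reusing one example in a new task configuration.

    # Minimal sketch: reuse one demonstrated trajectory for new endpoints.
    # This is NOT the thesis's constraint-library algorithm, only an illustration.
    import numpy as np

    def warp_demo(demo, new_start, new_goal):
        """Warp a demonstrated trajectory (T x D array of waypoints) so it
        begins at new_start and ends at new_goal."""
        demo = np.asarray(demo, dtype=float)
        old_start, old_goal = demo[0], demo[-1]
        # Progress along the demonstration, from 0 at the start to 1 at the end.
        alpha = np.linspace(0.0, 1.0, len(demo))[:, None]
        # Offset each waypoint by a blend of the start shift and the goal shift.
        offset = (1 - alpha) * (new_start - old_start) + alpha * (new_goal - old_goal)
        return demo + offset

    # Example: adapt a demonstrated 2D cable-routing path to new endpoints.
    demo_path = np.array([[0.0, 0.0], [0.2, 0.3], [0.5, 0.35], [0.8, 0.1], [1.0, 0.0]])
    new_path = warp_demo(demo_path,
                         new_start=np.array([0.1, 0.0]),
                         new_goal=np.array([1.2, 0.2]))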

Systems use and evaluate these algorithms with three interaction paradigms. Subjects supervise and teleoperate robot minions that perform object selection or navigation in mock safety-critical or inaccessible settings. Robot sidekicks collaborate with users to jointly lift objects and perform assemblies. Finally, robot apprentices generalize cable-routing trajectories or grasping orientations from few human demonstrations. Experiments with each system evaluate classification or motion estimation performance and user interface efficacy.
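
To make the supervision and teleoperation loop concrete, the short sketch below maps detected wearable-sensor outputs (a discrete gesture and an EEG-based error judgment) to discrete robot commands, in the spirit of the "minion" paradigm. The gesture names, command set, and robot interface are hypothetical placeholders rather than the system used in the thesis.

    # Minimal sketch of a gesture/EEG supervision loop (hypothetical names throughout).
    GESTURE_TO_COMMAND = {
        'left':  'turn_left',
        'right': 'turn_right',
        'up':    'move_forward',
        'down':  'stop',
    }

    class PrintRobot:
        """Stand-in robot interface that just logs the commands it receives."""
        def send(self, command):
            print(f'robot command: {command}')

    def supervise_step(gesture, eeg_error_detected, robot):
        """Issue one command based on the latest wearable-sensor outputs."""
        if eeg_error_detected:
            robot.send('undo_last_action')  # hypothetical corrective command
            return
        command = GESTURE_TO_COMMAND.get(gesture)
        if command is not None:
            robot.send(command)

    # Example: the user gestures 'left' and no error-related potential is detected.
    supervise_step('left', eeg_error_detected=False, robot=PrintRobot())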

Thesis Projects

  • Controlling drones and other robots with gestures: control drones and other robots using gestures and wearable sensors, such as making gestures to pilot a drone through hoops.

  • Lifting objects with robots using muscle signals: use muscle signals to lift objects together with a robot, such as jointly lifting a flexible rubber sheet.

  • Helping robots learn using demonstrations: help robots learn by providing demonstrations in virtual reality, such as supervising and remotely controlling a robot as it learns to grasp.

  • Correcting robots using brain signals and hand gestures: supervise and correct robots using brainwaves and hand gestures.

All Projects