My research focuses on using wearable sensors to make human-machine interaction more natural, to learn more about human behavior, and to provide people with physical or cognitive support. This includes creating smart clothing, such as gloves that detect sign language and medical garments with embedded sensing, as well as curating multimodal datasets to study how people perform daily tasks and how robots might learn from them.

To make it easier to work directly with robots, I’ve explored using muscle and motion signals to remotely control robots, muscle signals to physically collaborate with them, and brain signals to supervise them. If we can communicate with robots in ways that are more similar to how we communicate with other people, enabling robots to adapt to us rather than the other way around, then robots could become more integrated into our daily lives and assist us more effectively.

Other research projects that I’ve worked on at MIT include helping robots learn using demonstrations, an underwater communication system for a soft robotic fish, a system for people to create custom printable robots on demand, and an educational distributed robotic garden.


Publications

View all publications on Google Scholar, or browse them below.


2023

  • J. DelPreto, C. L. Brunelle, A. G. Taghian, and D. Rus, “Sensorizing a Compression Sleeve for Continuous Pressure Monitoring and Lymphedema Treatment Using Pneumatic or Resistive Sensors,” in IEEE International Conference on Soft Robotics (RoboSoft), 2023.
    [BibTeX] [Abstract] [Download PDF]

    Smart soft wearable devices have great potential to change how technology is integrated into daily life. A particularly impactful and growing application is continuous medical monitoring; being able to stream physiological and behavioral information creates personalized datasets that can lead to more tailored treatments, diagnoses, and research. An area that can greatly benefit from these developments is lymphedema management, which aims to prevent a potentially irreversible swelling of limbs due to causes such as breast cancer surgeries. Compression sleeves are the state of the art for treatment, but many open questions remain regarding effective pressure and usage prescriptions. To help address these, this work presents a soft pressure sensor, a way to integrate it into wearable devices, and sensorized compression sleeves that continuously monitor pressure and usage. There are significant challenges to developing sensors for high-pressure applications on the human body, including operating between soft compliant interfaces, being safe and unobtrusive, and reducing calibration for new users. This work compares two sensing approaches for wearable applications: a custom pouch-based pneumatic sensor, and a commercially available resistive sensor. Experiments systematically explore design considerations including sensitivity to ambient temperature and pressure, characterize sensor response curves, and evaluate expected accuracies and required calibrations. Sensors are then integrated into compression sleeves and worn for over 115 hours spanning 10 days.

    @inproceedings{delpreto2023sensorizedCompressionSleeve,
    title={Sensorizing a Compression Sleeve for Continuous Pressure Monitoring and Lymphedema Treatment Using Pneumatic or Resistive Sensors},
    author={DelPreto, Joseph and Brunelle, Cheryl L. and Taghian, Alphonse G. and Rus, Daniela},
    booktitle={IEEE International Conference on Soft Robotics (RoboSoft)},
    organization={IEEE},
    year={2023},
    month={April},
    url={http://www.josephdelpreto.com/wp-content/uploads/2023/04/DelPreto_smart-sleeve_RoboSoft2023.pdf},
    abstract={Smart soft wearable devices have great potential to change how technology is integrated into daily life. A particularly impactful and growing application is continuous medical monitoring; being able to stream physiological and behavioral information creates personalized datasets that can lead to more tailored treatments, diagnoses, and research. An area that can greatly benefit from these developments is lymphedema management, which aims to prevent a potentially irreversible swelling of limbs due to causes such as breast cancer surgeries. Compression sleeves are the state of the art for treatment, but many open questions remain regarding effective pressure and usage prescriptions. To help address these, this work presents a soft pressure sensor, a way to integrate it into wearable devices, and sensorized compression sleeves that continuously monitor pressure and usage. There are significant challenges to developing sensors for high-pressure applications on the human body, including operating between soft compliant interfaces, being safe and unobtrusive, and reducing calibration for new users. This work compares two sensing approaches for wearable applications: a custom pouch-based pneumatic sensor, and a commercially available resistive sensor. Experiments systematically explore design considerations including sensitivity to ambient temperature and pressure, characterize sensor response curves, and evaluate expected accuracies and required calibrations. Sensors are then integrated into compression sleeves and worn for over 115 hours spanning 10 days.}
    }
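
    As a rough companion to the abstract above (not the paper’s method), the Python sketch below shows how raw readings from a generic resistive pressure sensor in a voltage divider could be converted to pressure through a fitted calibration curve; the supply voltage, divider resistance, polynomial degree, and calibration points are all illustrative assumptions.

    # Minimal sketch (assumptions only): raw ADC counts -> sensor resistance -> pressure
    # via a polynomial calibration curve fitted against a reference gauge.
    import numpy as np

    V_SUPPLY = 3.3        # volts, assumed divider supply
    R_FIXED = 10_000.0    # ohms, assumed fixed divider resistor

    def adc_to_resistance(adc_counts, adc_max=4095):
        """Convert a raw ADC reading of the divider midpoint into sensor resistance."""
        v_out = (np.asarray(adc_counts, dtype=float) / adc_max) * V_SUPPLY
        v_out = np.clip(v_out, 1e-3, V_SUPPLY - 1e-3)   # avoid divide-by-zero
        return R_FIXED * (V_SUPPLY - v_out) / v_out

    def fit_calibration(resistances, reference_pressures_mmHg, degree=3):
        """Fit a polynomial mapping log-resistance -> pressure from reference points."""
        return np.polyfit(np.log(resistances), reference_pressures_mmHg, degree)

    def estimate_pressure(adc_counts, coeffs):
        """Apply the calibration curve to new raw readings."""
        return np.polyval(coeffs, np.log(adc_to_resistance(adc_counts)))

    # Example: calibrate against a reference cuff, then convert new readings.
    cal_adc = np.array([3300, 2900, 2500, 2100, 1800, 1500])
    cal_pressure = np.array([10, 20, 30, 40, 50, 60])    # mmHg from a reference gauge
    coeffs = fit_calibration(adc_to_resistance(cal_adc), cal_pressure)
    print(estimate_pressure([2700, 1900], coeffs))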

2022

  • J. DelPreto, C. Liu, Y. Luo, M. Foshey, Y. Li, A. Torralba, W. Matusik, and D. Rus, “ActionSense: A Multimodal Dataset and Recording Framework for Human Activities Using Wearable Sensors in a Kitchen Environment,” in Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2022.
    [BibTeX] [Abstract] [Download PDF]

    This paper introduces ActionSense, a multimodal dataset and recording framework with an emphasis on wearable sensing in a kitchen environment. It provides rich, synchronized data streams along with ground truth data to facilitate learning pipelines that could extract insights about how humans interact with the physical world during activities of daily living, and help lead to more capable and collaborative robot assistants. The wearable sensing suite captures motion, force, and attention information; it includes eye tracking with a first-person camera, forearm muscle activity sensors, a body-tracking system using 17 inertial sensors, finger-tracking gloves, and custom tactile sensors on the hands that use a matrix of conductive threads. This is coupled with activity labels and with externally-captured data from multiple RGB cameras, a depth camera, and microphones. The specific tasks recorded in ActionSense are designed to highlight lower-level physical skills and higher-level scene reasoning or action planning. They include simple object manipulations (e.g., stacking plates), dexterous actions (e.g., peeling or cutting vegetables), and complex action sequences (e.g., setting a table or loading a dishwasher). The resulting dataset and underlying experiment framework are available at https://action-sense.csail.mit.edu. Preliminary networks and analyses explore modality subsets and cross-modal correlations. ActionSense aims to support applications including learning from demonstrations, dexterous robot control, cross-modal predictions, and fine-grained action segmentation. It could also help inform the next generation of smart textiles that may one day unobtrusively send rich data streams to in-home collaborative or autonomous robot assistants.

    @inproceedings{delpretoLiu2022actionSense,
    title={{ActionSense}: A Multimodal Dataset and Recording Framework for Human Activities Using Wearable Sensors in a Kitchen Environment},
    author={Joseph DelPreto and Chao Liu and Yiyue Luo and Michael Foshey and Yunzhu Li and Antonio Torralba and Wojciech Matusik and Daniela Rus},
    booktitle={Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks},
    year={2022},
    url={https://action-sense.csail.mit.edu},
    abstract={This paper introduces ActionSense, a multimodal dataset and recording framework with an emphasis on wearable sensing in a kitchen environment. It provides rich, synchronized data streams along with ground truth data to facilitate learning pipelines that could extract insights about how humans interact with the physical world during activities of daily living, and help lead to more capable and collaborative robot assistants. The wearable sensing suite captures motion, force, and attention information; it includes eye tracking with a first-person camera, forearm muscle activity sensors, a body-tracking system using 17 inertial sensors, finger-tracking gloves, and custom tactile sensors on the hands that use a matrix of conductive threads. This is coupled with activity labels and with externally-captured data from multiple RGB cameras, a depth camera, and microphones. The specific tasks recorded in ActionSense are designed to highlight lower-level physical skills and higher-level scene reasoning or action planning. They include simple object manipulations (e.g., stacking plates), dexterous actions (e.g., peeling or cutting vegetables), and complex action sequences (e.g., setting a table or loading a dishwasher). The resulting dataset and underlying experiment framework are available at https://action-sense.csail.mit.edu. Preliminary networks and analyses explore modality subsets and cross-modal correlations. ActionSense aims to support applications including learning from demonstrations, dexterous robot control, cross-modal predictions, and fine-grained action segmentation. It could also help inform the next generation of smart textiles that may one day unobtrusively send rich data streams to in-home collaborative or autonomous robot assistants.}
    }
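
    Independent of the released ActionSense tooling, the sketch below illustrates the kind of time synchronization a multimodal recording framework has to provide: resampling two streams with different rates onto a common timeline before fusing them. The stream names, channel counts, and sampling rates are hypothetical.

    # Minimal sketch (toy data, not the ActionSense API): align two sensor streams
    # by linearly interpolating each channel onto a shared timeline.
    import numpy as np

    def resample_to_timeline(timestamps, values, target_times):
        """Linearly interpolate each channel of `values` onto `target_times`."""
        values = np.atleast_2d(values.T).T   # ensure shape (N, channels)
        return np.column_stack([
            np.interp(target_times, timestamps, values[:, c])
            for c in range(values.shape[1])
        ])

    # Hypothetical streams: 160 Hz EMG (8 channels) and 60 Hz joint angles (22 channels).
    t_emg = np.arange(0, 10, 1 / 160.0)
    emg = np.random.randn(t_emg.size, 8)
    t_joints = np.arange(0, 10, 1 / 60.0)
    joints = np.random.randn(t_joints.size, 22)

    # Common 50 Hz timeline restricted to the overlapping interval.
    t_common = np.arange(max(t_emg[0], t_joints[0]), min(t_emg[-1], t_joints[-1]), 1 / 50.0)
    emg_aligned = resample_to_timeline(t_emg, emg, t_common)
    joints_aligned = resample_to_timeline(t_joints, joints, t_common)
    features = np.hstack([emg_aligned, joints_aligned])   # (T, 30) fused feature matrix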

  • J. DelPreto, J. Hughes, M. D’Aria, M. de Fazio, and D. Rus, “A Wearable Smart Glove and Its Application of Pose and Gesture Detection to Sign Language Classification,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 4, 2022. doi:10.1109/LRA.2022.3191232
    [BibTeX] [Abstract] [Download PDF]

    Advances in soft sensors coupled with machine learning are enabling increasingly capable wearable systems. Since hand motion in particular can convey useful information for developing intuitive interfaces, glove-based systems can have a significant impact on many application areas. A key remaining challenge for wearables is to capture, process, and analyze data from the high-degree-of-freedom hand in real time. We propose using a commercially available conductive knit to create an unobtrusive network of resistive sensors that spans all hand joints, coupling this with an accelerometer, and deploying machine learning on a low-profile microcontroller to process and classify data. This yields a self-contained wearable device with rich sensing capabilities for hand pose and orientation, low fabrication time, and embedded activity prediction. To demonstrate its capabilities, we use it to detect static poses and dynamic gestures from American Sign Language (ASL). By pre-training a long short-term memory (LSTM) neural network and using tools to deploy it in an embedded context, the glove and an ST microcontroller can classify 12 ASL letters and 12 ASL words in real time. Using a leave-one-experiment-out cross validation methodology, networks successfully classify 96.3% of segmented examples and generate correct rolling predictions during 92.8% of real-time streaming trials.

    @article{delpretoHughes2022smartGlove,
    title={A Wearable Smart Glove and Its Application of Pose and Gesture Detection to Sign Language Classification},
    author={DelPreto, Joseph and Hughes, Josie and D'Aria, Matteo and de Fazio, Marco and Rus, Daniela},
    journal={IEEE Robotics and Automation Letters (RA-L)},
    organization={IEEE},
    year={2022},
    month={October},
    volume={7},
    number={4},
    doi={10.1109/LRA.2022.3191232},
    url={https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9830849},
    abstract={Advances in soft sensors coupled with machine learning are enabling increasingly capable wearable systems. Since hand motion in particular can convey useful information for developing intuitive interfaces, glove-based systems can have a significant impact on many application areas. A key remaining challenge for wearables is to capture, process, and analyze data from the high-degree-of-freedom hand in real time. We propose using a commercially available conductive knit to create an unobtrusive network of resistive sensors that spans all hand joints, coupling this with an accelerometer, and deploying machine learning on a low-profile microcontroller to process and classify data. This yields a self-contained wearable device with rich sensing capabilities for hand pose and orientation, low fabrication time, and embedded activity prediction. To demonstrate its capabilities, we use it to detect static poses and dynamic gestures from American Sign Language (ASL). By pre-training a long short-term memory (LSTM) neural network and using tools to deploy it in an embedded context, the glove and an ST microcontroller can classify 12 ASL letters and 12 ASL words in real time. Using a leave-one-experiment-out cross validation methodology, networks successfully classify 96.3% of segmented examples and generate correct rolling predictions during 92.8% of real-time streaming trials.}
    }
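
    In the spirit of the glove pipeline described above, here is a minimal PyTorch sketch of an LSTM that classifies windows of multichannel sensor data; the channel count, window length, class count, and network size are illustrative and this is not the deployed embedded model.

    # Minimal sketch (illustrative sizes): an LSTM classifier over windows of glove data.
    import torch
    import torch.nn as nn

    class GestureLSTM(nn.Module):
        def __init__(self, n_channels=18, hidden_size=64, n_classes=24):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, n_classes)

        def forward(self, x):                 # x: (batch, time, channels)
            _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
            return self.head(h_n[-1])         # logits: (batch, n_classes)

    model = GestureLSTM()
    window = torch.randn(4, 100, 18)          # 4 windows of 100 samples x 18 sensors
    logits = model(window)
    predicted = logits.argmax(dim=1)          # class index per window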

2021

  • J. DelPreto, “Robots as Minions, Sidekicks, and Apprentices: Using Wearable Muscle, Brain, and Motion Sensors for Plug-and-Play Human-Robot Interaction,” PhD Thesis, Massachusetts Institute of Technology (MIT), 2021.
    [BibTeX] [Abstract] [Download PDF]

    This thesis presents algorithms and systems that use unobtrusive wearable sensors for muscle, brain, and motion activity to enable more plug-and-play human-robot interactions. Detecting discrete commands and continuous motions creates a communication vocabulary for remote control or collaboration, and learning frameworks allow robots to generalize from these interactions. Each of these building blocks focuses on lowering the barrier to casual users benefiting from robots by reducing the amount of training data, calibration data, and sensing hardware needed. This thesis thus takes a step towards more ubiquitous robot assistants that could extend humans’ capabilities and improve quality of life. Classification and motion estimation algorithms create a plug-and-play vocabulary for robot control and teaching. Supervised learning pipelines detect directional gestures from muscle signals via electromyography (EMG), and unsupervised learning pipelines expand the vocabulary without requiring data collection. Classifiers also detect error judgments in brain signals via electroencephalography (EEG). Continuous motions are detected in two ways. Arm or walking trajectories are estimated from an inertial measurement unit (IMU) by leveraging in-task EMG-based gestures that demarcate stationary waypoints; the paths are then refined in an apprenticeship phase using gestures. Hand heights during lifting tasks are also estimated using EMG. Two frameworks for learning by demonstration build on these foundations. A generalization algorithm uses a single example trajectory and a constraint library to synthesize trajectories with similar behaviors in new task configurations. Alternatively, for tasks where the robot can autonomously explore behaviors, an apprenticeship framework augments self-supervision with intermittent demonstrations. Systems use and evaluate these algorithms with three interaction paradigms. Subjects supervise and teleoperate robot minions that perform object selection or navigation in mock safety-critical or inaccessible settings. Robot sidekicks collaborate with users to jointly lift objects and perform assemblies. Finally, robot apprentices generalize cable-routing trajectories or grasping orientations from few human demonstrations. Experiments with each system evaluate classification or motion estimation performance and user interface efficacy. This thesis thus aims to enhance and simplify human-robot interaction in a variety of settings. Allowing more people to explore novel uses for robots could take a step towards ubiquitous robot assistants that have captured imaginations for decades.

    @phdthesis{delpreto2021thesisWearablesHRI,
    author={DelPreto, Joseph},
    title={Robots as Minions, Sidekicks, and Apprentices: Using Wearable Muscle, Brain, and Motion Sensors for Plug-and-Play Human-Robot Interaction},
    year={2021},
    month={September},
    school={Massachusetts Institute of Technology (MIT)},
    address={Cambridge, MA, USA},
    url={https://people.csail.mit.edu/delpreto/thesis/delpreto_PhD-thesis_2021_wearables_human-robot-interaction.pdf},
    keywords={Human-Robot Interaction; Robotics; Physical Human-Robot Interaction; Human-Robot Collaboration; Wearable; Wearable Sensors; Wearable Devices; Supervision; Apprenticeship; Remote Control; Teleoperation; EMG; Muscle Activity; Gesture Detection; Learning from Demonstration; IMU; Motion Sensors; Inertial Measurement Unit; Motion Estimation; EEG; Brain Activity; Error-Related Potentials; ErrPs; Human-Centered Systems; Human in the Loop; Machine Learning; Neural Networks; Artificial Intelligence; Plug-and-Play; Data Augmentation; Clustering; Team Lifting; Load Sharing; User Studies; Team Fluency; Virtual Reality; VR; Computer Science; Robots},
    abstract={This thesis presents algorithms and systems that use unobtrusive wearable sensors for muscle, brain, and motion activity to enable more plug-and-play human-robot interactions. Detecting discrete commands and continuous motions creates a communication vocabulary for remote control or collaboration, and learning frameworks allow robots to generalize from these interactions. Each of these building blocks focuses on lowering the barrier to casual users benefiting from robots by reducing the amount of training data, calibration data, and sensing hardware needed. This thesis thus takes a step towards more ubiquitous robot assistants that could extend humans' capabilities and improve quality of life.
    Classification and motion estimation algorithms create a plug-and-play vocabulary for robot control and teaching. Supervised learning pipelines detect directional gestures from muscle signals via electromyography (EMG), and unsupervised learning pipelines expand the vocabulary without requiring data collection. Classifiers also detect error judgments in brain signals via electroencephalography (EEG). Continuous motions are detected in two ways. Arm or walking trajectories are estimated from an inertial measurement unit (IMU) by leveraging in-task EMG-based gestures that demarcate stationary waypoints; the paths are then refined in an apprenticeship phase using gestures. Hand heights during lifting tasks are also estimated using EMG.
    Two frameworks for learning by demonstration build on these foundations. A generalization algorithm uses a single example trajectory and a constraint library to synthesize trajectories with similar behaviors in new task configurations. Alternatively, for tasks where the robot can autonomously explore behaviors, an apprenticeship framework augments self-supervision with intermittent demonstrations.
    Systems use and evaluate these algorithms with three interaction paradigms. Subjects supervise and teleoperate robot minions that perform object selection or navigation in mock safety-critical or inaccessible settings. Robot sidekicks collaborate with users to jointly lift objects and perform assemblies. Finally, robot apprentices generalize cable-routing trajectories or grasping orientations from few human demonstrations. Experiments with each system evaluate classification or motion estimation performance and user interface efficacy.
    This thesis thus aims to enhance and simplify human-robot interaction in a variety of settings. Allowing more people to explore novel uses for robots could take a step towards ubiquitous robot assistants that have captured imaginations for decades.
    }
    }
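
    One concrete idea in the thesis abstract is using gesture-marked stationary waypoints to bound IMU integration drift. The toy 1-D sketch below (an assumption-laden illustration, not the thesis algorithm) double-integrates acceleration while zeroing the velocity at samples known to be stationary.

    # Minimal sketch (toy 1-D dead reckoning): zero the velocity at gesture-marked
    # stationary waypoints so integration drift cannot accumulate indefinitely.
    import numpy as np

    def integrate_with_waypoints(accel, dt, stationary_indices):
        """Double-integrate acceleration, resetting velocity at known stationary samples."""
        stationary = set(stationary_indices)
        vel = np.zeros_like(accel)
        pos = np.zeros_like(accel)
        for i in range(1, len(accel)):
            vel[i] = 0.0 if i in stationary else vel[i - 1] + accel[i] * dt
            pos[i] = pos[i - 1] + vel[i] * dt
        return pos

    # Toy usage: noisy acceleration with gesture-detected pauses at samples 200 and 400.
    accel = np.random.randn(600) * 0.05                       # m/s^2, illustrative noise
    trajectory = integrate_with_waypoints(accel, dt=0.01, stationary_indices=[200, 400])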

2020

  • J. DelPreto, J. I. Lipton, L. Sanneman, A. J. Fay, C. Fourie, C. Choi, and D. Rus, “Helping Robots Learn: A Human-Robot Master-Apprentice Model Using Demonstrations Via Virtual Reality Teleoperation,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020. doi:10.1109/ICRA40945.2020.9196754
    [BibTeX] [Abstract] [Download PDF]

    As artificial intelligence becomes an increasingly prevalent method of enhancing robotic capabilities, it is important to consider effective ways to train these learning pipelines and to leverage human expertise. Working towards these goals, a master-apprentice model is presented and is evaluated during a grasping task for effectiveness and human perception. The apprenticeship model augments self-supervised learning with learning by demonstration, efficiently using the human’s time and expertise while facilitating future scalability to supervision of multiple robots; the human provides demonstrations via virtual reality when the robot cannot complete the task autonomously. Experimental results indicate that the robot learns a grasping task with the apprenticeship model faster than with a solely self-supervised approach and with fewer human interventions than a solely demonstration-based approach; 100\% grasping success is obtained after 150 grasps with 19 demonstrations. Preliminary user studies evaluating workload, usability, and effectiveness of the system yield promising results for system scalability and deployability. They also suggest a tendency for users to overestimate the robot’s skill and to generalize its capabilities, especially as learning improves.

    @inproceedings{delpretoLipton2020helpingRobotsLearn,
    title={Helping Robots Learn: A Human-Robot Master-Apprentice Model Using Demonstrations Via Virtual Reality Teleoperation},
    author={DelPreto, Joseph and Lipton, Jeffrey I. and Sanneman, Lindsay and Fay, Aidan J. and Fourie, Christopher and Choi, Changhyun and Rus, Daniela},
    booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
    year={2020},
    month={May},
    publisher={IEEE},
    doi={10.1109/ICRA40945.2020.9196754},
    URL={http://people.csail.mit.edu/delpreto/icra2020/delpreto-lipton_helping-robots-learn_icra2020.pdf},
    abstract={As artificial intelligence becomes an increasingly prevalent method of enhancing robotic capabilities, it is important to consider effective ways to train these learning pipelines and to leverage human expertise. Working towards these goals, a master-apprentice model is presented and is evaluated during a grasping task for effectiveness and human perception. The apprenticeship model augments self-supervised learning with learning by demonstration, efficiently using the human's time and expertise while facilitating future scalability to supervision of multiple robots; the human provides demonstrations via virtual reality when the robot cannot complete the task autonomously. Experimental results indicate that the robot learns a grasping task with the apprenticeship model faster than with a solely self-supervised approach and with fewer human interventions than a solely demonstration-based approach; 100\% grasping success is obtained after 150 grasps with 19 demonstrations. Preliminary user studies evaluating workload, usability, and effectiveness of the system yield promising results for system scalability and deployability. They also suggest a tendency for users to overestimate the robot's skill and to generalize its capabilities, especially as learning improves.}
    }
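
    The sketch below captures only the control flow suggested by the abstract: attempt a grasp autonomously when the model is confident, and request a VR demonstration otherwise. The model, robot execution, and VR interface here are toy stand-ins, and the confidence threshold is an assumption.

    # Minimal sketch (toy control flow, not the paper's system): mix self-supervised
    # grasp attempts with on-demand human demonstrations.
    import random

    CONFIDENCE_THRESHOLD = 0.6   # assumed cutoff for asking the human for help

    class ToyGraspModel:
        def propose(self, scene):
            return {"pose": scene["object_pose"]}, random.random()   # (grasp, confidence)
        def update(self, dataset):
            pass   # retraining from (scene, grasp, success) tuples would happen here

    def apprenticeship_step(scene, model, execute_grasp, request_demo, dataset):
        grasp, confidence = model.propose(scene)
        if confidence < CONFIDENCE_THRESHOLD:
            grasp = request_demo(scene)          # human provides a demonstration via VR
        success = execute_grasp(grasp)           # robot attempts the chosen grasp
        dataset.append((scene, grasp, success))  # both outcomes feed the same dataset
        model.update(dataset)
        return success

    # Toy usage with random outcomes in place of real robot execution and VR input.
    dataset, model = [], ToyGraspModel()
    for trial in range(5):
        scene = {"object_pose": (trial, 0.0, 0.0)}
        apprenticeship_step(scene, model,
                            execute_grasp=lambda g: random.random() > 0.3,
                            request_demo=lambda s: {"pose": s["object_pose"]},
                            dataset=dataset)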

  • J. DelPreto and D. Rus, “Plug-and-Play Gesture Control Using Muscle and Motion Sensors,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI), New York, NY, USA, 2020, pp. 439–448. doi:10.1145/3319502.3374823
    [BibTeX] [Abstract] [Download PDF]

    As the capacity for machines to extend human capabilities continues to grow, the communication channels used must also expand. Allowing machines to interpret nonverbal commands such as gestures can help make interactions more similar to interactions with another person. Yet to be pervasive and effective in realistic scenarios, such interfaces should not require significant sensing infrastructure or per-user setup time. The presented work takes a step towards these goals by using wearable muscle and motion sensors to detect gestures without dedicated calibration or training procedures. An algorithm is presented for clustering unlabeled streaming data in real time, and it is applied to adaptively thresholding muscle and motion signals acquired via electromyography (EMG) and an inertial measurement unit (IMU). This enables plug-and-play online detection of arm stiffening, fist clenching, rotation gestures, and forearm activation. It also augments a neural network pipeline, trained only on strategically chosen training data from previous users, to detect left, right, up, and down gestures. Together, these pipelines offer a plug-and-play gesture vocabulary suitable for remotely controlling a robot. Experiments with 6 subjects evaluate classifier performance and interface efficacy. Classifiers correctly identified 97.6\% of 1,200 cued gestures, and a drone correctly responded to 81.6\% of 1,535 unstructured gestures as subjects remotely controlled it through target hoops during 119 minutes of total flight time.

    @inproceedings{delpreto2020emgImuGesturesDrone,
    author={DelPreto, Joseph and Rus, Daniela},
    title={Plug-and-Play Gesture Control Using Muscle and Motion Sensors},
    year={2020},
    month={March},
    isbn={9781450367462},
    publisher={ACM},
    address={New York, NY, USA},
    url={https://dl.acm.org/doi/10.1145/3319502.3374823?cid=99658989019},
    doi={10.1145/3319502.3374823},
    booktitle={Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI)},
    pages={439–448},
    numpages={10},
    keywords={Robotics, EMG, Wearable Sensors, Human-Robot Interaction, Gestures, Plug-and-Play, Machine Learning, IMU, Teleoperation},
    location={Cambridge, United Kingdom},
    series={HRI ’20},
    abstract={As the capacity for machines to extend human capabilities continues to grow, the communication channels used must also expand. Allowing machines to interpret nonverbal commands such as gestures can help make interactions more similar to interactions with another person. Yet to be pervasive and effective in realistic scenarios, such interfaces should not require significant sensing infrastructure or per-user setup time. The presented work takes a step towards these goals by using wearable muscle and motion sensors to detect gestures without dedicated calibration or training procedures. An algorithm is presented for clustering unlabeled streaming data in real time, and it is applied to adaptively thresholding muscle and motion signals acquired via electromyography (EMG) and an inertial measurement unit (IMU). This enables plug-and-play online detection of arm stiffening, fist clenching, rotation gestures, and forearm activation. It also augments a neural network pipeline, trained only on strategically chosen training data from previous users, to detect left, right, up, and down gestures. Together, these pipelines offer a plug-and-play gesture vocabulary suitable for remotely controlling a robot. Experiments with 6 subjects evaluate classifier performance and interface efficacy. Classifiers correctly identified 97.6\% of 1,200 cued gestures, and a drone correctly responded to 81.6\% of 1,535 unstructured gestures as subjects remotely controlled it through target hoops during 119 minutes of total flight time.}
    }
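
    As a hedged stand-in for the adaptive-thresholding idea above (not the paper’s algorithm), the sketch below runs a tiny online two-cluster estimator on a streaming signal envelope so the “rest” and “active” levels adapt to the user without a calibration routine; the learning rate and separation margin are made up.

    # Minimal sketch: online 2-means on a 1-D envelope with a rest/active decision.
    import numpy as np

    class OnlineTwoClusterThreshold:
        """Track 'rest' and 'active' levels from streaming samples and flag activity."""
        def __init__(self, learning_rate=0.05, min_separation=0.2):
            self.low = None       # running estimate of the rest level
            self.high = None      # running estimate of the active level
            self.lr = learning_rate
            self.min_separation = min_separation

        def update(self, sample):
            sample = float(sample)
            if self.low is None:               # initialize both levels at the first sample
                self.low = self.high = sample
                return False
            midpoint = 0.5 * (self.low + self.high)
            if sample > midpoint:              # nudge the nearer level toward the sample
                self.high += self.lr * (sample - self.high)
            else:
                self.low += self.lr * (sample - self.low)
            # Only report activity once the two levels are clearly separated.
            return (self.high - self.low) > self.min_separation and sample > midpoint

    # Toy EMG-like envelope: rest, a burst of activation, then rest again.
    envelope = np.concatenate([np.abs(np.random.randn(200) * 0.1),
                               np.abs(np.random.randn(100) * 0.1) + 1.0,
                               np.abs(np.random.randn(200) * 0.1)])
    detector = OnlineTwoClusterThreshold()
    activations = [detector.update(x) for x in envelope]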

  • J. DelPreto, A. F. Salazar-Gomez, S. Gil, R. Hasani, F. H. Guenther, and D. Rus, “Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection,” Autonomous Robots, 2020. doi:10.1007/s10514-020-09916-x
    [BibTeX] [Abstract] [Download PDF]

    Effective human supervision of robots can be key for ensuring correct robot operation in a variety of potentially safety-critical scenarios. This paper takes a step towards fast and reliable human intervention in supervisory control tasks by combining two streams of human biosignals: muscle and brain activity acquired via EMG and EEG, respectively. It presents continuous classification of left and right hand-gestures using muscle signals, time-locked classification of error-related potentials using brain signals (unconsciously produced when observing an error), and a framework that combines these pipelines to detect and correct robot mistakes during multiple-choice tasks. The resulting hybrid system is evaluated in a “plug-and-play” fashion with 7 untrained subjects supervising an autonomous robot performing a target selection task. Offline analysis further explores the EMG classification performance, and investigates methods to select subsets of training data that may facilitate generalizable plug-and-play classifiers.

    @article{delpreto2020emgeegsupervisory,
    title={Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection},
    author={DelPreto, Joseph and Salazar-Gomez, Andres F. and Gil, Stephanie and Hasani, Ramin and Guenther, Frank H. and Rus, Daniela},
    journal={Autonomous Robots},
    year={2020},
    month={August},
    publisher={Springer},
    doi={10.1007/s10514-020-09916-x},
    url={https://link.springer.com/article/10.1007/s10514-020-09916-x},
    abstract={Effective human supervision of robots can be key for ensuring correct robot operation in a variety of potentially safety-critical scenarios. This paper takes a step towards fast and reliable human intervention in supervisory control tasks by combining two streams of human biosignals: muscle and brain activity acquired via EMG and EEG, respectively. It presents continuous classification of left and right hand-gestures using muscle signals, time-locked classification of error-related potentials using brain signals (unconsciously produced when observing an error), and a framework that combines these pipelines to detect and correct robot mistakes during multiple-choice tasks. The resulting hybrid system is evaluated in a ``plug-and-play'' fashion with 7 untrained subjects supervising an autonomous robot performing a target selection task. Offline analysis further explores the EMG classification performance, and investigates methods to select subsets of training data that may facilitate generalizable plug-and-play classifiers.}
    }
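
    The snippet below sketches only the fusion logic implied by the abstract, with hypothetical classifier outputs: if the EEG window time-locked to the robot’s choice looks like an error-related potential, wait for an EMG gesture to pick the corrected target.

    # Minimal sketch (fusion logic only): combine an EEG error decision with rolling
    # EMG gesture decisions to veto and correct a robot's choice.
    def supervise_choice(robot_choice, errp_probability, emg_gesture_stream,
                         errp_threshold=0.5):
        """Return the final target, correcting the robot's choice if an ErrP is detected.

        errp_probability: classifier output for the EEG window time-locked to the
            moment the robot indicated its choice (hypothetical value here).
        emg_gesture_stream: iterable of rolling gesture labels ('left', 'right', None).
        """
        if errp_probability < errp_threshold:
            return robot_choice                 # brain signals suggest the choice was fine
        for gesture in emg_gesture_stream:      # wait for an explicit corrective gesture
            if gesture in ("left", "right"):
                return gesture
        return robot_choice                     # no correction received; keep original

    # Toy usage: the EEG classifier flags an error, and the user gestures 'left'.
    final_target = supervise_choice("right", errp_probability=0.8,
                                    emg_gesture_stream=[None, None, "left"])
    print(final_target)   # -> 'left'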

2019

  • J. DelPreto and D. Rus, “Sharing the Load: Human-Robot Team Lifting Using Muscle Activity,” in 2019 IEEE International Conference on Robotics and Automation (ICRA), 2019. doi:10.1109/ICRA.2019.8794414
    [BibTeX] [Abstract] [Download PDF]

    Seamless communication of desired motions and goals is essential for enabling effective physical human-robot collaboration. In such cases, muscle activity measured via surface electromyography (EMG) can provide insight into a person’s intentions while minimally distracting from the task. The presented system uses two muscle signals to create a control framework for team lifting tasks in which a human and robot lift an object together. A continuous setpoint algorithm uses biceps activity to estimate changes in the user’s hand height, and also allows the user to explicitly adjust the robot by stiffening or relaxing their arm. In addition to this pipeline, a neural network trained only on previous users classifies biceps and triceps activity to detect up or down gestures on a rolling basis; this enables finer control over the robot and expands the feasible workspace. The resulting system is evaluated by 10 untrained subjects performing a variety of team lifting and assembly tasks with rigid and flexible objects.

    @inproceedings{delpreto2019emglifting,
    title={Sharing the Load: Human-Robot Team Lifting Using Muscle Activity},
    author={DelPreto, Joseph and Rus, Daniela},
    booktitle={2019 IEEE International Conference on Robotics and Automation (ICRA)},
    year={2019},
    month={May},
    publisher={IEEE},
    doi={10.1109/ICRA.2019.8794414},
    URL={http://people.csail.mit.edu/delpreto/icra2019/delpreto_emg_team_lifting_ICRA19.pdf},
    abstract={Seamless communication of desired motions and goals is essential for enabling effective physical human-robot collaboration. In such cases, muscle activity measured via surface electromyography (EMG) can provide insight into a person's intentions while minimally distracting from the task. The presented system uses two muscle signals to create a control framework for team lifting tasks in which a human and robot lift an object together. A continuous setpoint algorithm uses biceps activity to estimate changes in the user's hand height, and also allows the user to explicitly adjust the robot by stiffening or relaxing their arm. In addition to this pipeline, a neural network trained only on previous users classifies biceps and triceps activity to detect up or down gestures on a rolling basis; this enables finer control over the robot and expands the feasible workspace. The resulting system is evaluated by 10 untrained subjects performing a variety of team lifting and assembly tasks with rigid and flexible objects.}
    }
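
    As an illustrative mapping rather than the paper’s controller, the sketch below rectifies and smooths an EMG signal into an envelope and integrates its deviation from a rest baseline into a height setpoint, so sustained stiffening raises the setpoint and relaxing lowers it; the gain, limits, and toy signal are assumptions.

    # Minimal sketch: EMG envelope -> incremental changes of a robot height setpoint.
    import numpy as np

    def emg_envelope(emg, window=50):
        """Rectify and smooth raw EMG with a moving average (toy envelope extraction)."""
        return np.convolve(np.abs(emg), np.ones(window) / window, mode="same")

    def height_setpoints(envelope, baseline, gain=0.002, z0=1.0, z_limits=(0.8, 1.4)):
        """Integrate deviations from a rest baseline into a clipped height setpoint."""
        z, setpoints = z0, []
        for e in envelope:
            z += gain * (e - baseline)          # above baseline -> raise, below -> lower
            z = float(np.clip(z, *z_limits))    # keep within a safe workspace
            setpoints.append(z)
        return np.array(setpoints)

    # Toy EMG: rest, a sustained contraction, then rest.
    emg = np.concatenate([np.random.randn(400) * 0.05,
                          np.random.randn(400) * 0.05 + np.sin(np.linspace(0, 60, 400)),
                          np.random.randn(400) * 0.05])
    env = emg_envelope(emg)
    z = height_setpoints(env, baseline=np.mean(env[:300]))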

2018

  • J. DelPreto, A. F. Salazar-Gomez, S. Gil, R. M. Hasani, F. H. Guenther, and D. Rus, “Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection,” in Robotics: Science and Systems (RSS), 2018. doi:10.15607/RSS.2018.XIV.063
    [BibTeX] [Abstract] [Download PDF]

    Control of robots in safety-critical tasks and situations where costly errors may occur is paramount for realizing the vision of pervasive human-robot collaborations. For these cases, the ability to use human cognition in the loop can be key for recuperating safe robot operation. This paper combines two streams of human biosignals, electrical muscle and brain activity via EMG and EEG, respectively, to achieve fast and accurate human intervention in a supervisory control task. In particular, this paper presents an end-to-end system for continuous rolling-window classification of gestures that allows the human to actively correct the robot on demand, discrete classification of Error-Related Potential signals (unconsciously produced by the human supervisor’s brain when observing a robot error), and a framework that integrates these two classification streams for fast and effective human intervention. The system also allows ‘plug-and-play’ operation, demonstrating accurate performance even with new users whose biosignals have not been used for training the classifiers. The resulting hybrid control system for safety-critical situations is evaluated with 7 untrained human subjects in a supervisory control scenario where an autonomous robot performs a multi-target selection task.

    @inproceedings{delpreto2018emgeegsupervisory,
    title={Plug-and-Play Supervisory Control Using Muscle and Brain Signals for Real-Time Gesture and Error Detection},
    author={DelPreto, Joseph and Salazar-Gomez, Andres F. and Gil, Stephanie and Hasani, Ramin M. and Guenther, Frank H. and Rus, Daniela},
    booktitle={Robotics: Science and Systems (RSS)},
    year={2018},
    month={June},
    doi={10.15607/RSS.2018.XIV.063},
    url={http://groups.csail.mit.edu/drl/wiki/images/d/d8/delpreto_rss2018_emg_eeg.pdf},
    abstract={Control of robots in safety-critical tasks and situations where costly errors may occur is paramount for realizing the vision of pervasive human-robot collaborations. For these cases, the ability to use human cognition in the loop can be key for recuperating safe robot operation. This paper combines two streams of human biosignals, electrical muscle and brain activity via EMG and EEG, respectively, to achieve fast and accurate human intervention in a supervisory control task. In particular, this paper presents an end-to-end system for continuous rolling-window classification of gestures that allows the human to actively correct the robot on demand, discrete classification of Error-Related Potential signals (unconsciously produced by the human supervisor's brain when observing a robot error), and a framework that integrates these two classification streams for fast and effective human intervention. The system also allows 'plug-and-play' operation, demonstrating accurate performance even with new users whose biosignals have not been used for training the classifiers. The resulting hybrid control system for safety-critical situations is evaluated with 7 untrained human subjects in a supervisory control scenario where an autonomous robot performs a multi-target selection task.}
    }

  • R. K. Katzschmann, J. DelPreto, R. MacCurdy, and D. Rus, “Exploration of underwater life with an acoustically controlled soft robotic fish,” Science Robotics, vol. 3, iss. 16, 2018. doi:10.1126/scirobotics.aar3449
    [BibTeX] [Abstract] [Download PDF]

    Closeup exploration of underwater life requires new forms of interaction, using biomimetic creatures that are capable of agile swimming maneuvers, equipped with cameras, and supported by remote human operation. Current robotic prototypes do not provide adequate platforms for studying marine life in their natural habitats. This work presents the design, fabrication, control, and oceanic testing of a soft robotic fish that can swim in three dimensions to continuously record the aquatic life it is following or engaging. Using a miniaturized acoustic communication module, a diver can direct the fish by sending commands such as speed, turning angle, and dynamic vertical diving. This work builds on previous generations of robotic fish that were restricted to one plane in shallow water and lacked remote control. Experimental results gathered from tests along coral reefs in the Pacific Ocean show that the robotic fish can successfully navigate around aquatic life at depths ranging from 0 to 18 meters. Furthermore, our robotic fish exhibits a lifelike undulating tail motion enabled by a soft robotic actuator design that can potentially facilitate a more natural integration into the ocean environment. We believe that our study advances beyond what is currently achievable using traditional thruster-based and tethered autonomous underwater vehicles, demonstrating methods that can be used in the future for studying the interactions of aquatic life and ocean dynamics.

    @article{katzschmann2018explorationsoftfish,
    title={Exploration of underwater life with an acoustically controlled soft robotic fish},
    author={Katzschmann, Robert K and DelPreto, Joseph and MacCurdy, Robert and Rus, Daniela},
    journal={Science Robotics},
    volume={3},
    number={16},
    elocation-id={eaar3449},
    year={2018},
    month={March},
    publisher={Science Robotics},
    doi={10.1126/scirobotics.aar3449},
    url={https://robotics.sciencemag.org/content/3/16/eaar3449.full.pdf},
    abstract={Closeup exploration of underwater life requires new forms of interaction, using biomimetic creatures that are capable of agile swimming maneuvers, equipped with cameras, and supported by remote human operation. Current robotic prototypes do not provide adequate platforms for studying marine life in their natural habitats. This work presents the design, fabrication, control, and oceanic testing of a soft robotic fish that can swim in three dimensions to continuously record the aquatic life it is following or engaging. Using a miniaturized acoustic communication module, a diver can direct the fish by sending commands such as speed, turning angle, and dynamic vertical diving. This work builds on previous generations of robotic fish that were restricted to one plane in shallow water and lacked remote control. Experimental results gathered from tests along coral reefs in the Pacific Ocean show that the robotic fish can successfully navigate around aquatic life at depths ranging from 0 to 18 meters. Furthermore, our robotic fish exhibits a lifelike undulating tail motion enabled by a soft robotic actuator design that can potentially facilitate a more natural integration into the ocean environment. We believe that our study advances beyond what is currently achievable using traditional thruster-based and tethered autonomous underwater vehicles, demonstrating methods that can be used in the future for studying the interactions of aquatic life and ocean dynamics.}
    }

  • C. Choi, W. Schwarting, J. DelPreto, and D. Rus, “Learning object grasping for soft robot hands,” IEEE Robotics and Automation Letters (RA-L), vol. 3, iss. 3, 2018. doi:10.1109/LRA.2018.2810544
    [BibTeX] [Abstract] [Download PDF]

    We present a three-dimensional deep convolutional neural network (3D CNN) approach for grasping unknown objects with soft hands. Soft hands are compliant and capable of handling uncertainty in sensing and actuation, but come at the cost of unpredictable deformation of the soft fingers. Traditional model-driven grasping approaches, which assume known models for objects, robot hands, and stable grasps with expected contacts, are inapplicable to such soft hands, since predicting contact points between objects and soft hands is not straightforward. Our solution adopts a deep CNN approach to find good caging grasps for previously unseen objects by learning effective features and a classifier from point cloud data. Unlike recent CNN models applied to robotic grasping which have been trained on 2D or 2.5D images and limited to a fixed top grasping direction, we exploit the power of a 3D CNN model to estimate suitable grasp poses from multiple grasping directions (top and side directions) and wrist orientations, which has great potential for geometry-related robotic tasks. Our soft hands guided by the 3D CNN algorithm show 87\% successful grasping on previously unseen objects. A set of comparative evaluations shows the robustness of our approach with respect to noise and occlusions.

    @article{choi2018learningsoftgrasp,
    title={Learning object grasping for soft robot hands},
    author={Choi, Changhyun and Schwarting, Wilko and DelPreto, Joseph and Rus, Daniela},
    journal={IEEE Robotics and Automation Letters (RA-L)},
    organization={IEEE},
    year={2018},
    month={July},
    volume={3},
    number={3},
    doi={10.1109/LRA.2018.2810544},
    ISSN={2377-3766},
    url={https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8304630},
    abstract={We present a three-dimensional deep convolutional neural network (3D CNN) approach for grasping unknown objects with soft hands. Soft hands are compliant and capable of handling uncertainty in sensing and actuation, but come at the cost of unpredictable deformation of the soft fingers. Traditional model-driven grasping approaches, which assume known models for objects, robot hands, and stable grasps with expected contacts, are inapplicable to such soft hands, since predicting contact points between objects and soft hands is not straightforward. Our solution adopts a deep CNN approach to find good caging grasps for previously unseen objects by learning effective features and a classifier from point cloud data. Unlike recent CNN models applied to robotic grasping which have been trained on 2D or 2.5D images and limited to a fixed top grasping direction, we exploit the power of a 3D CNN model to estimate suitable grasp poses from multiple grasping directions (top and side directions) and wrist orientations, which has great potential for geometry-related robotic tasks. Our soft hands guided by the 3D CNN algorithm show 87\% successful grasping on previously unseen objects. A set of comparative evaluations shows the robustness of our approach with respect to noise and occlusions.}
    }
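
    For shape only, here is a small PyTorch 3-D CNN that scores a voxelized crop for grasp feasibility; the grid size, channel counts, and two-class output are illustrative and not the architecture from the paper.

    # Minimal sketch (illustrative architecture): score an occupancy grid for grasping.
    import torch
    import torch.nn as nn

    class VoxelGraspNet(nn.Module):
        def __init__(self, grid=32, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            )
            flat = 32 * (grid // 4) ** 3
            self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(flat, 128), nn.ReLU(),
                                            nn.Linear(128, n_classes))

        def forward(self, voxels):               # voxels: (batch, 1, D, H, W) occupancy grid
            return self.classifier(self.features(voxels))

    net = VoxelGraspNet()
    candidate = torch.zeros(1, 1, 32, 32, 32)    # hypothetical voxelized object crop
    scores = net(candidate)                      # logits over {bad grasp, good grasp}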

2017

  • A. F. Salazar-Gomez, J. DelPreto, S. Gil, F. H. Guenther, and D. Rus, “Correcting robot mistakes in real time using EEG signals,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017. doi:10.1109/ICRA.2017.7989777
    [BibTeX] [Abstract] [Download PDF]

    Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for realtime closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.

    @inproceedings{salazar2017eegcorrecting,
    title={Correcting robot mistakes in real time using {EEG} signals},
    author={Salazar-Gomez, Andres F and DelPreto, Joseph and Gil, Stephanie and Guenther, Frank H and Rus, Daniela},
    booktitle={2017 IEEE International Conference on Robotics and Automation (ICRA)},
    organization={IEEE},
    year={2017},
    month={May},
    doi={10.1109/ICRA.2017.7989777},
    url={http://groups.csail.mit.edu/drl/wiki/images/e/ec/Correcting_Robot_Mistakes_in_Real_Time_Using_EEG_Signals.pdf},
    abstract={Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for realtime closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.}
    }
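
    The sketch below shows generic ErrP-style processing rather than the paper’s pipeline: cut EEG into windows time-locked to the robot’s feedback events and train a simple classifier on the flattened epochs. The sampling rate, window bounds, channel count, and logistic-regression choice are assumptions, and the data are random placeholders.

    # Minimal sketch: time-locked epoch extraction plus a toy error/no-error classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    FS = 256                      # assumed sampling rate (Hz)
    WINDOW = (0.2, 0.8)           # seconds after the event to keep

    def extract_epochs(eeg, event_samples, fs=FS, window=WINDOW):
        """Return an (events, channels * samples) matrix of time-locked windows."""
        start, stop = int(window[0] * fs), int(window[1] * fs)
        return np.array([eeg[:, s + start:s + stop].ravel() for s in event_samples])

    # Toy data: 48-channel EEG, 40 events, random labels (1 = robot error observed).
    eeg = np.random.randn(48, 60 * FS)
    events = np.sort(np.random.randint(FS, 58 * FS, size=40))
    labels = np.random.randint(0, 2, size=40)

    X = extract_epochs(eeg, events)
    clf = LogisticRegression(max_iter=1000).fit(X[:30], labels[:30])
    print("held-out accuracy (meaningless on random data):", clf.score(X[30:], labels[30:]))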

2016

  • J. DelPreto, “Let’s make robots! Automated co-generation of electromechanical devices from user specifications using a modular robot library,” Master’s thesis, Massachusetts Institute of Technology (MIT), 2016.
    [BibTeX] [Abstract] [Download PDF]

    Personalized on-demand robots have a vast potential to impact daily life and change how people interact with technology, but so far this potential has remained largely untapped. Building robots is typically restricted to experts due to the extensive knowledge, experience, and resources required. This thesis aims to remove these barriers with an end-to-end system for intuitively designing robots from high-level specifications. By describing an envisioned structure or behavior, casual users can immediately build and use a robot for their task. The presented work encourages users to treat robots for physical tasks as they would treat software for computational tasks. By simplifying the design process and fostering an iterative approach, it moves towards the proliferation of on-demand custom robots that can address applications including education, healthcare, disaster aid, and everyday life. Users can intuitively compose modular components from an integrated library into complex electromechanical devices. The system provides design feedback, performs verification, makes any required modifications, and then co-designs the underlying subsystems to generate wiring instructions, mechanical drawings, microcontroller code for autonomous behavior, and user interface software. The current work features printable origami-inspired foldable robots as well as general electromechanical devices, and is extensible to many fabrication techniques. Building upon this foundation, tools are provided that allow users to describe functionality rather than structure, simulate robot systems, and explore design spaces to achieve behavioral guarantees. The presented system allows non-engineering users to rapidly fabricate customized robots, facilitating the proliferation of robots in everyday life. It thereby marks an important step towards the realization of personal robots that have captured imaginations for decades.

    @mastersthesis{delpreto2016letsmakerobots,
    title={Let's make robots! Automated co-generation of electromechanical devices from user specifications using a modular robot library},
    author={DelPreto, Joseph},
    year={2016},
    month={February},
    school={Massachusetts Institute of Technology (MIT)},
    address={Cambridge, MA, USA},
    url={http://hdl.handle.net/1721.1/103672},
    abstract={Personalized on-demand robots have a vast potential to impact daily life and change how people interact with technology, but so far this potential has remained largely untapped. Building robots is typically restricted to experts due to the extensive knowledge, experience, and resources required. This thesis aims to remove these barriers with an end-to-end system for intuitively designing robots from high-level specifications. By describing an envisioned structure or behavior, casual users can immediately build and use a robot for their task. The presented work encourages users to treat robots for physical tasks as they would treat software for computational tasks. By simplifying the design process and fostering an iterative approach, it moves towards the proliferation of on-demand custom robots that can address applications including education, healthcare, disaster aid, and everyday life. Users can intuitively compose modular components from an integrated library into complex electromechanical devices. The system provides design feedback, performs verification, makes any required modifications, and then co-designs the underlying subsystems to generate wiring instructions, mechanical drawings, microcontroller code for autonomous behavior, and user interface software. The current work features printable origami-inspired foldable robots as well as general electromechanical devices, and is extensible to many fabrication techniques. Building upon this foundation, tools are provided that allow users to describe functionality rather than structure, simulate robot systems, and explore design spaces to achieve behavioral guarantees. The presented system allows non-engineering users to rapidly fabricate customized robots, facilitating the proliferation of robots in everyday life. It thereby marks an important step towards the realization of personal robots that have captured imaginations for decades.}
    }

  • C. Choi, J. DelPreto, and D. Rus, “Using vision for pre- and post-grasping object localization for soft hands,” in 2016 International Symposium on Experimental Robotics (ISER), 2016. doi:10.1007/978-3-319-50115-4_52
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we present soft hands guided by an RGB-D object perception algorithm which is capable of localizing the pose of an object before and after grasping. The soft hands can perform manipulation operations such as grasping and connecting two parts. The flexible soft grippers grasp objects reliably under high uncertainty but the poses of the objects after grasping are subject to high uncertainty. Visual sensing ameliorates the increased uncertainty by means of in-hand object localization. The combination of soft hands and visual object perception enables our Baxter robot, augmented with soft hands, to perform object assembly tasks which require high precision. The effectiveness of our approach is validated by comparing it to the Baxter’s original hard hands with and without the in-hand object localization.

    @inproceedings{choi2016visiongrasping,
    title={Using vision for pre- and post-grasping object localization for soft hands},
    author={Choi, Changhyun and DelPreto, Joseph and Rus, Daniela},
    booktitle={2016 International Symposium on Experimental Robotics (ISER)},
    publisher={Springer International Publishing},
    year={2016},
    doi={10.1007/978-3-319-50115-4_52},
    isbn={978-3-319-50115-4},
    url={https://link.springer.com/content/pdf/10.1007/978-3-319-50115-4_52.pdf},
    abstract={In this paper, we present soft hands guided by an RGB-D object perception algorithm which is capable of localizing the pose of an object before and after grasping. The soft hands can perform manipulation operations such as grasping and connecting two parts. The flexible soft grippers grasp objects reliably under high uncertainty but the poses of the objects after grasping are subject to high uncertainty. Visual sensing ameliorates the increased uncertainty by means of in-hand object localization. The combination of soft hands and visual object perception enables our Baxter robot, augmented with soft hands, to perform object assembly tasks which require high precision. The effectiveness of our approach is validated by comparing it to the Baxter's original hard hands with and without the in-hand object localization.}
    }

2015

  • J. DelPreto, R. Katzschmann, R. MacCurdy, and D. Rus, “A compact acoustic communication module for remote control underwater,” in Proceedings of the 10th International Conference on Underwater Networks & Systems (WUWNET), 2015. doi:10.1145/2831296.2831337
    [BibTeX] [Abstract] [Download PDF]

    This paper describes an end-to-end compact acoustic communication system designed for easy integration into remotely controlled underwater operations. The system supports up to 2048 commands that are encoded as 16 bit words. We present the design, hardware, and supporting algorithms for this system. A pulse-based FSK modulation scheme is presented, along with a method of demodulation requiring minimal processing power that leverages the Goertzel algorithm and dynamic peak detection. We packaged the system together with an intuitive user interface for remotely controlling an autonomous underwater vehicle. We evaluated this system in the pool and in the open ocean. We present the communication data collected during experiments using the system to control an underwater robot.

    @inproceedings{delpreto2015acoustic,
    title={A compact acoustic communication module for remote control underwater},
    author={DelPreto, Joseph and Katzschmann, Robert and MacCurdy, Robert and Rus, Daniela},
    booktitle={Proceedings of the 10th International Conference on Underwater Networks \& Systems (WUWNET)},
    organization={ACM},
    year={2015},
    month={October},
    doi = {10.1145/2831296.2831337},
    isbn = {978-1-4503-4036-6},
    url={http://groups.csail.mit.edu/drl/wiki/images/f/f9/Delpreto-2015-A_Compact_Acoustic_Communication_Module_for_Remote_Control_Underwater.pdf},
    abstract={This paper describes an end-to-end compact acoustic communication system designed for easy integration into remotely controlled underwater operations. The system supports up to 2048 commands that are encoded as 16 bit words. We present the design, hardware, and supporting algorithms for this system. A pulse-based FSK modulation scheme is presented, along with a method of demodulation requiring minimal processing power that leverages the Goertzel algorithm and dynamic peak detection. We packaged the system together with an intuitive user interface for remotely controlling an autonomous underwater vehicle. We evaluated this system in the pool and in the open ocean. We present the communication data collected during experiments using the system to control an underwater robot.}
    }
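
    The demodulator above checks the received signal's energy at each candidate FSK tone with the Goertzel algorithm, which evaluates a single DFT bin through a cheap recurrence instead of computing a full FFT, making it well suited to low-power embedded hardware. The Python sketch below only illustrates that idea; the function names, window handling, and two-tone decision rule are placeholder assumptions rather than the parameters or implementation used in the paper.

    import math

    def goertzel_power(samples, sample_rate, target_freq):
        # Estimate the signal power at one frequency using the Goertzel
        # recurrence, which evaluates a single DFT bin in O(N) with no FFT.
        n = len(samples)
        k = round(n * target_freq / sample_rate)        # nearest DFT bin index
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s_prev, s_prev2 = 0.0, 0.0
        for x in samples:
            s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
        # Squared magnitude of the selected bin
        return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

    def classify_fsk_pulse(samples, sample_rate, freq_zero, freq_one):
        # Decide whether one pulse window encodes a 0 or a 1 by comparing
        # the energy detected at the two candidate tone frequencies.
        p0 = goertzel_power(samples, sample_rate, freq_zero)
        p1 = goertzel_power(samples, sample_rate, freq_one)
        return 1 if p1 > p0 else 0

    As described in the abstract, the full receiver pairs tone detection of this kind with dynamic peak detection to decide when a valid pulse is present before assembling bits into the 16-bit command words.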

  • L. Sanneman, D. Ajilo, J. DelPreto, A. Mehta, S. Miyashita, N. A. Poorheravi, C. Ramirez, S. Yim, S. Kim, and D. Rus, “A distributed robot garden system,” in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015. doi:10.1109/ICRA.2015.7140058
    [BibTeX] [Abstract] [Download PDF]

    Computational thinking is an important part of a modern education, and robotics provides a powerful tool for teaching programming logic in an interactive and engaging way. The robot garden presented in this paper is a distributed multi-robot system capable of running autonomously or under user control from a simple graphical interface. Over 100 origami flowers are actuated with LEDs and printed pouch motors, and are deployed in a modular array around additional swimming and crawling folded robots. The garden integrates state-of-the-art rapid design and fabrication technologies with distributed systems software techniques to create a scalable swarm in which robots can be controlled individually or as a group. The garden can be used to teach basic algorithmic concepts through its distributed algorithm demonstration capabilities and can teach programming concepts through its education-oriented user interface.

    @inproceedings{sanneman2015garden,
    title={A distributed robot garden system},
    author={Sanneman, Lindsay and Ajilo, Deborah and DelPreto, Joseph and Mehta, Ankur and Miyashita, Shuhei and Poorheravi, Negin Abdolrahim and Ramirez, Cami and Yim, Sehyuk and Kim, Sangbae and Rus, Daniela},
    booktitle={2015 IEEE International Conference on Robotics and Automation (ICRA)},
    organization={IEEE},
    year={2015},
    month={May},
    doi={10.1109/ICRA.2015.7140058},
    ISSN={1050-4729},
    url={https://groups.csail.mit.edu/drl/wiki/images/b/b1/2015_ICRA_Garden.pdf},
    abstract={Computational thinking is an important part of a modern education, and robotics provides a powerful tool for teaching programming logic in an interactive and engaging way. The robot garden presented in this paper is a distributed multi-robot system capable of running autonomously or under user control from a simple graphical interface. Over 100 origami flowers are actuated with LEDs and printed pouch motors, and are deployed in a modular array around additional swimming and crawling folded robots. The garden integrates state-of-the-art rapid design and fabrication technologies with distributed systems software techniques to create a scalable swarm in which robots can be controlled individually or as a group. The garden can be used to teach basic algorithmic concepts through its distributed algorithm demonstration capabilities and can teach programming concepts through its education-oriented user interface.}
    }

  • A. Mehta, J. DelPreto, and D. Rus, “Integrated codesign of printable robots,” Journal of Mechanisms and Robotics, vol. 7, iss. 2, 2015. doi:10.1115/1.4029496
    [BibTeX] [Abstract] [Download PDF]

    This work presents a system by which users can easily create printable origami-inspired robots from high-level structural specifications. Starting from a library of basic mechanical, electrical, and software building blocks, users can hierarchically assemble integrated electromechanical components and programmed mechanisms. The system compiles those designs to cogenerate complete fabricable outputs: mechanical drawings suitable for direct manufacture, wiring instructions for electronic devices, and firmware and user interface (UI) software to control the final robot autonomously or from human input. This process allows everyday users to create on-demand custom printable robots for personal use, without the requisite engineering background, design tools, and cycle time typical of the process today. This paper describes the system and its use, demonstrating its abilities and versatility through the design of several disparate robots.

    @article{mehta2015codesign,
    title={Integrated codesign of printable robots},
    author={Mehta, Ankur and DelPreto, Joseph and Rus, Daniela},
    journal={Journal of Mechanisms and Robotics},
    publisher={American Society of Mechanical Engineers (ASME)},
    volume={7},
    number={2},
    year={2015},
    month={May},
    doi={10.1115/1.4029496},
    url={http://mechanismsrobotics.asmedigitalcollection.asme.org/data/journals/jmroa6/933289/jmr_007_02_021015.pdf},
    abstract={This work presents a system by which users can easily create printable origami-inspired robots from high-level structural specifications. Starting from a library of basic mechanical, electrical, and software building blocks, users can hierarchically assemble integrated electromechanical components and programmed mechanisms. The system compiles those designs to cogenerate complete fabricable outputs: mechanical drawings suitable for direct manufacture, wiring instructions for electronic devices, and firmware and user interface (UI) software to control the final robot autonomously or from human input. This process allows everyday users to create on-demand custom printable robots for personal use, without the requisite engineering background, design tools, and cycle time typical of the process today. This paper describes the system and its use, demonstrating its abilities and versatility through the design of several disparate robots.}
    }
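
    The compiler described above co-generates mechanical drawings, wiring instructions, and control software from a hierarchical assembly of library blocks. As a rough sketch of that idea only, with the class names, fields, and "compile" step invented here for illustration rather than taken from the paper, a component library and compile pass might be organized like this in Python:

    from dataclasses import dataclass, field

    @dataclass
    class Block:
        # One entry in a hypothetical building-block library, bundling the three
        # design domains that must be co-generated for the final robot.
        name: str
        drawing: str                                  # fold-pattern fragment, e.g. a DXF file
        wiring: list = field(default_factory=list)    # (pin, signal) connection pairs
        firmware: str = ""                            # code fragment that drives the block

    def compile_design(blocks):
        # Merge the per-block fragments into one set of fabricable outputs.
        return {
            "drawings": [b.drawing for b in blocks],
            "wiring_instructions": [pair for b in blocks for pair in b.wiring],
            "firmware": "\n".join(b.firmware for b in blocks if b.firmware),
        }

    # Example: a toy two-block assembly drawn from the library
    design = compile_design([
        Block("body_segment", "body.dxf"),
        Block("drive_motor", "motor_mount.dxf",
              wiring=[("D3", "motor_pwm")],
              firmware="set_pwm('D3', speed)"),
    ])

    The actual system additionally supports hierarchical, parameterized mechanisms and also emits user-interface software, which this flat example leaves out.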

  • A. M. Mehta, J. DelPreto, K. W. Wong, S. Hamill, H. Kress-Gazit, and D. Rus, “Robot creation from functional specifications,” in The International Symposium on Robotics Research (ISRR), 2015. doi:10.1007/978-3-319-60916-4_36
    [BibTeX] [Abstract] [Download PDF]

    The design of new robots is often a time-intensive task requiring multi-disciplinary expertise, making it difficult to create custom robots on demand. To help address these issues, this work presents an integrated end-to-end system for rapidly creating printable robots from a Structured English description of desired behavior. Linear temporal logic (LTL) is used to formally represent the functional requirements from a structured task specification, and a modular component library is used to ground the propositions and generate structural specifications; complete mechanical, electrical, and software designs are then automatically synthesized. The ability and versatility of this system are demonstrated by sample robots designed in this manner.

    @inbook{mehta2015robotcreation,
    title={Robot creation from functional specifications},
    author={Mehta, Ankur M and DelPreto, Joseph and Wong, Kai Weng and Hamill, Scott and Kress-Gazit, Hadas and Rus, Daniela},
    booktitle={The International Symposium on Robotics Research (ISRR)},
    year={2015},
    doi={10.1007/978-3-319-60916-4_36},
    isbn={978-3-319-60916-4},
    url={https://link.springer.com/content/pdf/10.1007/978-3-319-60916-4_36.pdf},
    abstract={The design of new robots is often a time-intensive task requiring multi-disciplinary expertise, making it difficult to create custom robots on demand. To help address these issues, this work presents an integrated end-to-end system for rapidly creating printable robots from a Structured English description of desired behavior. Linear temporal logic (LTL) is used to formally represent the functional requirements from a structured task specification, and a modular component library is used to ground the propositions and generate structural specifications; complete mechanical, electrical, and software designs are then automatically synthesized. The ability and versatility of this system are demonstrated by sample robots designed in this manner.}
    }
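
    To make the grounding step above concrete: linear temporal logic expresses requirements with temporal operators such as G (always) and F (eventually), and each proposition must then be tied to a sensor or actuator drawn from the component library. The Python snippet below is a generic illustration with invented names, not a specification or mapping taken from the paper.

    # Hypothetical example: a Structured English requirement, its LTL form, and a
    # grounding of each proposition onto illustrative library components.
    requirement = "Always, if an obstacle is sensed, eventually turn away."
    ltl_formula = "G (obstacle -> F turn)"   # G = always, F = eventually

    grounding = {
        "obstacle": {"component": "ir_range_sensor", "reading": "distance < 0.2"},
        "turn":     {"component": "drive_module",    "action": "rotate(90)"},
    }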

2014

  • A. M. Mehta, J. DelPreto, B. Shaya, and D. Rus, “Cogeneration of mechanical, electrical, and software designs for printable robots from structural specifications,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014. doi:10.1109/IROS.2014.6942960
    [BibTeX] [Abstract] [Download PDF]

    Designing and fabricating new robotic systems is typically limited to experts, requiring engineering background, expensive tools, and considerable time. In contrast, to facilitate everyday users developing custom robots for personal use, this work presents a new system to easily create printable foldable robots from high-level structural specifications. A user merely needs to select electromechanical components from a library of basic building blocks and pre-designed mechanisms, then connect them to define custom robot assemblies. The system then generates complete mechanical drawings suitable for fabrication, instructions for the assembly of electronics, and software to control and drive the final robot. Several robots designed in this manner demonstrate the ability and versatility of this process.

    @inproceedings{mehta2014cogeneration,
    title={Cogeneration of mechanical, electrical, and software designs for printable robots from structural specifications},
    author={Mehta, Ankur M and DelPreto, Joseph and Shaya, Benjamin and Rus, Daniela},
    booktitle={2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    organization={IEEE},
    year={2014},
    month={September},
    doi={10.1109/IROS.2014.6942960},
    ISSN={2153-0858},
    url={http://people.csail.mit.edu/mehtank/webpubs/iros2014.pdf},
    abstract={Designing and fabricating new robotic systems is typically limited to experts, requiring engineering background, expensive tools, and considerable time. In contrast, to facilitate everyday users developing custom robots for personal use, this work presents a new system to easily create printable foldable robots from high-level structural specifications. A user merely needs to select electromechanical components from a library of basic building blocks and pre-designed mechanisms, then connect them to define custom robot assemblies. The system then generates complete mechanical drawings suitable for fabrication, instructions for the assembly of electronics, and software to control and drive the final robot. Several robots designed in this manner demonstrate the ability and versatility of this process.}
    }

  • J. DelPreto, A. M. Mehta, and D. Rus, “Extended Abstract: Cogeneration of Electrical and Software Designs from Structural Specifications,” in Robot Makers Workshop, Robotics: Science and Systems (RSS), 2014.
    [BibTeX] [Abstract]

    While technology has increasingly permeated modern society through devices such as laptops and smartphones, the goal of personal robotics remains largely elusive. Current methods for designing, fabricating, and programming robots require extensive knowledge, time, and resources which prevent the general public from enjoying their potential benefits. A system that uses high-level functional descriptions to automatically and quickly generate inexpensive robot designs, including relevant instructions, fabrication files, electrical layouts, low-level software, and user interfaces, could therefore have a significant impact on incorporating robots into daily life. While a casual user desiring to create a robot for a personal task would likely have a rough idea of what the robot should look like, the necessary electronics and software are often more abstract and harder to define. While systems such as Lego Mindstorms ([1]) or LittleBits ([2]) provide modular frameworks for creating devices, they typically require additional programming by the user or offer a limited range of modules. To help address such issues, a system is presented which aims to automatically generate electrical designs, control software, and user interfaces from more intuitive structural specifications.

    @inproceedings{delpreto2014cogeneration,
    title={Extended Abstract: Cogeneration of Electrical and Software Designs from Structural Specifications},
    author={DelPreto, Joseph and Mehta, Ankur M and Rus, Daniela},
    booktitle={Robot Makers Workshop, Robotics: Science and Systems (RSS)},
    year={2014},
    month={June},
    abstract={While technology has increasingly permeated modern society through devices such as laptops and smartphones, the goal of personal robotics remains largely elusive. Current methods for designing, fabricating, and programming robots require extensive knowledge, time, and resources which prevent the general public from enjoying their potential benefits. A system that uses high-level functional descriptions to automatically and quickly generate inexpensive robot designs, including relevant instructions, fabrication files, electrical layouts, low-level software, and user interfaces, could therefore have a significant impact on incorporating robots into daily life. While a casual user desiring to create a robot for a personal task would likely have a rough idea of what the robot should look like, the necessary electronics and software are often more abstract and harder to define. While systems such as Lego Mindstorms ([1]) or LittleBits ([2]) provide modular frameworks for creating devices, they typically require additional programming by the user or offer a limited range of modules. To help address such issues, a system is presented which aims to automatically generate electrical designs, control software, and user interfaces from more intuitive structural specifications.}
    }

2011

  • L. Wang, J. DelPreto, S. Bhattacharyya, J. Weisz, and P. K. Allen, “A highly-underactuated robotic hand with force and joint angle sensors,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011. doi:10.1109/IROS.2011.6095147
    [BibTeX] [Abstract] [Download PDF]

    This paper describes a novel underactuated robotic hand design. The hand is highly underactuated as it contains three fingers with three joints each controlled by a single motor. One of the fingers (“thumb”) can also be rotated about the base of the hand, yielding a total of two controllable degrees-of-freedom. A key component of the design is the addition of position and tactile sensors which provide precise angle feedback and binary force feedback. Our mechanical design can be analyzed theoretically to predict contact forces as well as hand position given a particular object shape.

    @inproceedings{wang2011underactuated,
    title={A highly-underactuated robotic hand with force and joint angle sensors},
    author={Wang, Long and DelPreto, Joseph and Bhattacharyya, Sam and Weisz, Jonathan and Allen, Peter K},
    booktitle={2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    organization={IEEE},
    year={2011},
    month={September},
    doi={10.1109/IROS.2011.6095147},
    url={https://academiccommons.columbia.edu/doi/10.7916/D8B56TZM/download},
    abstract={This paper describes a novel underactuated robotic hand design. The hand is highly underactuated as it contains three fingers with three joints each controlled by a single motor. One of the fingers (“thumb”) can also be rotated about the base of the hand, yielding a total of two controllable degrees-of-freedom. A key component of the design is the addition of position and tactile sensors which provide precise angle feedback and binary force feedback. Our mechanical design can be analyzed theoretically to predict contact forces as well as hand position given a particular object shape.}
    }
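
    The theoretical analysis mentioned above is characteristic of underactuated, tendon-driven fingers, where one actuator input is distributed across several joints. As a generic illustration only, and not the specific model derived in the paper, the static torque balance at joint i of such a finger is often written in LaTeX form as

    \tau_i = t\, r_i - k_i\,(\theta_i - \theta_i^{0}),

    where t is the tendon tension, r_i the tendon moment arm at joint i, k_i the stiffness of that joint's return spring, and \theta_i^{0} its rest angle; joints continue to flex while their net torque is positive, which is what allows contact forces and final joint angles to be predicted once an object shape constrains the motion.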

Click to expand patents

2023

  • D. Rus and J. DelPreto, “Robot Training System,” Filed, 2023.
    [BibTeX]
    @article{delpreto2023patentRobotTraininSystem,
    title={Robot Training System},
    author={Rus, Daniela and DelPreto, Joseph},
    year={2023},
    month={December},
    journal={Filed},
    }

2022

  • D. Rus and J. DelPreto, “Robot Training System,” Filed, 2022.
    [BibTeX]
    @article{rus2021patentRobotTraininSystem,
    title={Robot Training System},
    author={Rus, Daniela and DelPreto, Joseph},
    year={2022},
    month={December},
    journal={Filed},
    }

2020

  • J. Bingham, T. Alexander, B. Homberg, and J. DelPreto, “Robot Grip Detection Using Non-Contact Sensors,” Issued: US Patent 10,792,809, 2020.
    [BibTeX] [Download PDF]
    @article{bingham2020patentGripDetection,
    title={Robot Grip Detection Using Non-Contact Sensors},
    author={Bingham, Jeffrey and Alexander, Taylor and Homberg, Bianca and DelPreto, Joseph},
    journal={Issued: {US} Patent 10,792,809},
    year={2020},
    month={October},
    URL={https://patents.google.com/patent/US10792809B2/en}
    }

  • J. Bingham, T. Alexander, B. Homberg, J. DelPreto, and A. Shafer, “Sensorized Robotic Gripping Device,” Issued: US Patent 10,682,774, 2020.
    [BibTeX] [Download PDF]
    @article{bingham2020patentSensorizedGripper,
    title={Sensorized Robotic Gripping Device},
    author={Bingham, Jeffrey and Alexander, Taylor and Homberg, Bianca and DelPreto, Joseph and Shafer, Alex},
    journal={Issued: {US} Patent 10,682,774},
    year={2020},
    month={June},
    URL={https://patents.google.com/patent/US10682774B2/en}
    }

2018

  • D. Rus, J. DelPreto, and A. Mehta, “Systems and Methods for Compiling Robotic Assemblies,” Issued: US Patent 10,071,487, 2018.
    [BibTeX] [Download PDF]
    @article{rus2018patentCompiler,
    title={Systems and Methods for Compiling Robotic Assemblies},
    author={Rus, Daniela and DelPreto, Joseph and Mehta, Ankur},
    journal={Issued: {US} Patent 10,071,487},
    year={2018},
    month={September},
    URL={https://patents.google.com/patent/US10071487B2/en}
    }


Projects