Learning a thousand tasks in a day
Humans are remarkably efficient at learning tasks from demonstrations, but today's imitation learning methods for robot manipulation often require hundreds or thousands of demonstrations per task. We investigated two fundamental priors for improving learning efficiency: decomposition of manipulation trajectories into sequential alignment and interaction phases, and retrieval-based generalization. Through 3450 real-world rollouts, we systematically studied this decomposition. We compared different design choices for the alignment and interaction phases and examined generalization and scaling trends relative to today's dominant paradigm of behavioral cloning with a single-phase monolithic policy. In the few-demonstrations-per-task regime (<10 demonstrations), decomposition achieved an order-of-magnitude improvement in data efficiency over single-phase learning, with retrieval consistently outperforming behavioral cloning for both alignment and interaction. Building on these insights, we developed Multi-Task Trajectory Transfer (MT3), an imitation learning method based on decomposition and retrieval. MT3 learns everyday manipulation tasks from as few as a single demonstration each while also generalizing to previously unseen object instances. This efficiency enabled us to teach a robot 1000 distinct everyday tasks in under 24 hours of human demonstrator time. Through 2200 additional real-world rollouts, we reveal MT3's capabilities and limitations across different task families.
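As a concrete illustration of the decomposition-plus-retrieval recipe described above, the following Python sketch retrieves the nearest stored demonstration and then executes an alignment phase followed by an interaction phase. It is only an assumption-laden illustration of the general idea, not MT3's implementation; helper names such as embed, servo_to_pose, and replay, and the demonstration record format, are hypothetical.

    import numpy as np

    def nearest_demo(observation, demos, embed):
        # Retrieval-based generalization: pick the stored demonstration
        # whose embedding is closest to the current observation.
        query = embed(observation)
        distances = [np.linalg.norm(query - d["embedding"]) for d in demos]
        return demos[int(np.argmin(distances))]

    def execute_task(observation, demos, embed, servo_to_pose, replay):
        demo = nearest_demo(observation, demos, embed)
        # Alignment phase: move the end effector to the demonstrated pose,
        # expressed relative to the re-detected target object.
        servo_to_pose(demo["grasp_pose_in_object_frame"])
        # Interaction phase: replay the demonstrated interaction trajectory
        # from the aligned starting pose.
        replay(demo["interaction_trajectory"])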
Metamaterial robotics
Mechanical metamaterials with customized microstructures are increasingly shaping robotic design and functionality, enabling the integration of sensing, actuation, control, and computation within the robot body. This Review outlines how metamaterial design principles (mechanics-inspired architectures, shape-reconfigurable structures, and material-driven functionality) enhance adaptability and distributed intelligence in robotics. We also discuss how artificial intelligence supports metamaterial robotics in design, modeling, and control, advancing systems with complex sensory feedback, learning capability, and adaptive physical interactions. This Review aims to inspire the community to explore the transformative potential of metamaterial robotics, fostering innovations that bridge the gap between materials engineering and intelligent robotics.
Robotic cross-pollination of genetically modified flowers
Engineered tomato plants produced flowers with visible stigmas that a robot could detect and pollinate faster than a human.
The robots in and are as amazing as the superheroes
Robot assistants in and may not save the world, but they fulfill six different jobs.
Deep domain adaptation eliminates costly data required for task-agnostic wearable robotic control
Data-driven methods have transformed our ability to assess and respond to human movement with wearable robots, promising real-world rehabilitation and augmentation benefits. However, the proliferation of data-driven methods, with the associated demand for increased personalization and performance, requires vast quantities of high-quality, device-specific data. Procuring these data is often intractable because of resource and personnel costs. We propose a framework that overcomes data scarcity by leveraging simulated sensors from biomechanical models to form a stepping-stone domain through which easily accessible data can be translated into data-limited domains. We developed and optimized a deep domain adaptation network that replaces costly, device-specific, labeled data with open-source datasets and unlabeled exoskeleton data. Using our network, we trained a hip and knee joint moment estimator with performance comparable to a best-case model trained with a complete, device-specific dataset [incurring only an 11 to 20%, 0.019 to 0.028 newton-meters per kilogram (Nm/kg) increase in error for a semisupervised model and 20 to 44%, 0.033 to 0.062 Nm/kg for an unsupervised model]. Our network significantly outperformed counterpart networks without domain adaptation (which incurred error increases of 36 to 45% semisupervised and 50 to 60% unsupervised). Deploying our models in the real-time control loop of a hip/knee exoskeleton (n = 8) demonstrated estimator performance similar to offline results while augmenting user performance based on those estimated moments (9.5 to 14.6% metabolic cost reductions compared with no exoskeleton). Our framework enables researchers to train real-time deployable, task-agnostic deep learning models with limited or no access to labeled, device-specific data.
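To make the idea of replacing labeled device-specific data with unlabeled exoskeleton data more concrete, here is a minimal PyTorch sketch of one common domain-adaptation recipe (domain-adversarial training): a supervised regression loss on labeled source data is combined with an adversarial domain loss on unlabeled target data. The layer sizes, loss weighting, and overall architecture are placeholders and are not claimed to match the network in the article.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            return x.clone()
        @staticmethod
        def backward(ctx, grad):
            # Reversing gradients pushes the encoder toward domain-invariant features.
            return -grad

    encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
    regressor = nn.Linear(64, 2)        # hip and knee joint moments (Nm/kg)
    discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def training_step(x_sim, y_sim, x_exo,
                      mse=nn.MSELoss(), bce=nn.BCEWithLogitsLoss()):
        f_sim, f_exo = encoder(x_sim), encoder(x_exo)
        task_loss = mse(regressor(f_sim), y_sim)      # supervised on labeled source data only
        feats = torch.cat([GradReverse.apply(f_sim), GradReverse.apply(f_exo)])
        labels = torch.cat([torch.zeros(len(x_sim), 1), torch.ones(len(x_exo), 1)])
        domain_loss = bce(discriminator(feats), labels)  # which domain did each sample come from?
        return task_loss + domain_loss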
Erratum for the Research Article "A lightweight robotic leg prosthesis replicating the biomechanics of the knee, ankle, and toe joint" by M. Tran
Adaptive humanlike grasping
Rich tactile embodiment enables robotic hands to perform grasping tasks with humanlike adaptability.
Foldable and rollable interlaced structure for deployable robotic systems
Extendable structures often use rollable designs, with long, flexible materials that can be wound onto a hub for storage without the need for joints. However, achieving high stiffness and strength in the extended state while keeping the hub compact is challenging, given that stiff structures are difficult to bend and typically require larger hubs for storage. Here, we introduce a corrugated sheet-shaped foldable design that enables Z-folding by connecting multiple strips in parallel. In its unfolded, corrugated form, the structure offers a high load-bearing capacity, and in its folded, stacked form, it can be smoothly rolled onto a hub, enabling fold-and-roll storage. The key innovation is the formation of an interlaced origami structure by connecting strips through a ribbon-weaving technique. This interlacing design enables both localized flexibility and mutual constraints between strips: The localized flexibility accommodates perimeter differences between stacked strips during rolling, and the densely repeated mutual constraints make the corrugation resist excessive deformation under external forces. Using these structures, we made two deployable mobile robots: one with a 1.6-meter deployable arm for shelving tasks and another with a tetrahedral deployable frame that supported a meter-scale 3D-printing system. Our results showcase the potential of this interlaced, corrugated approach for deployable robotic systems requiring both compactness and strength.
Robotic manipulation of human bipedalism reveals overlapping internal representations of space and time
Effective control of bipedal postures relies on sensory inputs from the past, which encode dynamic changes in the spatial properties of our movement over time. To uncover how the spatial and temporal properties of an upright posture interact in the perception and control of standing balance, we implemented a robotic virtualization of human body dynamics to systematically alter inertia and viscosity as well as sensorimotor delays in 20 healthy participants. Inertia gains below one or negative viscosity gains led to larger postural oscillations and caused participants to exceed virtual balance limits, mimicking the disruptive effects of an additional 200-millisecond sensorimotor delay. When balancing without delays, participants adjusted their inertia gains to below one and viscosity gains to negative values to match the perception of balancing with an imposed delay. When delays were present, participants increased inertia gains above one and used positive viscosity gains to align their perception with baseline balance. Building on these findings, we showed that 10 naïve participants exhibited improved balance stability and fewer instances of exceeding the virtual balance limits when balancing with a 200-millisecond delay compensated by inertia gains above one and positive viscosity gains. These results underscore the importance of innovative robotic virtualizations of standing balance to reveal the interconnected representations of space and time that underlie the stable perception and control of bipedal balance. Robotic manipulation of body physics offers a transformative approach to understanding how the nervous system processes spatial information over time and could address clinical sensorimotor deficits associated with delays.
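The roles of the manipulated quantities can be made concrete with a toy single-link inverted-pendulum model in which virtual inertia and viscosity gains rescale the body dynamics and sensory feedback arrives after a delay. All parameter values below are hypothetical and chosen purely for illustration; they do not describe the robotic platform or the controller used in the study.

    import numpy as np

    def simulate(inertia_gain=1.0, viscosity_gain=0.0, delay_s=0.0,
                 dt=0.001, T=10.0, I=80.0, B=5.0, mgl=700.0, kp=900.0, kd=300.0):
        n, lag = int(T / dt), int(delay_s / dt)
        theta, omega = 0.01, 0.0                 # small initial lean (rad, rad/s)
        history = [(theta, omega)] * (lag + 1)   # buffer of past states
        trace = []
        for _ in range(n):
            th_d, om_d = history[-(lag + 1)]     # delayed sensory estimate
            torque = -kp * th_d - kd * om_d      # corrective ankle torque
            # Virtualized body dynamics: inertia and viscosity are rescaled by the gains.
            alpha = (mgl * np.sin(theta) - viscosity_gain * B * omega + torque) / (inertia_gain * I)
            omega += alpha * dt
            theta += omega * dt
            history.append((theta, omega))
            trace.append(theta)
        return np.array(trace)   # sway angle over time; larger excursions mean poorer balance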
The microDelta: Downscaling robot mechanisms enables ultrafast and high-precision movement
Physical scaling laws predict that miniaturizing robotic mechanisms should enable exceptional robot performance in metrics such as speed and precision. Although these scaling laws have been explored in a variety of microsystems, the benefits and limitations of downscaling three-dimensional (3D) robotic mechanisms have yet to be assessed because of limitations in microscale 3D manufacturing. In this work, we used the Delta robot as a case study for these scaling laws. We present two sizes of 3D-printed Delta robots, the microDeltas, measuring 1.4 and 0.7 millimeters in height, which demonstrate state-of-the-art performance in both size and speed compared with previously reported Delta robots. Printing with two-photon polymerization and subsequent metallization enabled the miniaturization of these 3D robotic parallel mechanisms integrated with electrostatic actuators for achieving high bandwidths. The smallest microDelta was able to operate at more than 1000 hertz and achieved precisions of less than 1 micrometer by taking advantage of its small size. The microDelta's relatively high output power was demonstrated with the launch of a small projectile, highlighting the utility of miniaturized robotic systems for applications ranging from manufacturing to haptics.
Sight Guide demonstrates robotics-inspired vision assistance at the Cybathlon
A mobile-robotics-based navigation and perception system guided a visually impaired pilot through complex tasks at Cybathlon.
Artificial embodied circuits uncover neural architectures of vertebrate visuomotor behaviors
Brains evolve within specific sensory and physical environments, yet neuroscience has traditionally focused on studying neural circuits in isolation. Understanding of their function requires integrative brain-body testing in realistic contexts. To investigate the neural and biomechanical mechanisms of sensorimotor transformations, we constructed realistic neuromechanical simulations (simZFish) of the zebrafish optomotor response, a visual stabilization behavior. By computationally reproducing the body mechanics, physical body-water interactions, hydrodynamics, visual environments, and experimentally derived neural network architectures, we closely replicated the behavior of real larval zebrafish. Through systematic manipulation of physiological and circuit connectivity features, impossible in biological experiments, we demonstrate how embodiment shapes neural activity, circuit architecture, and behavior. Changing lens properties and retinal connectivity revealed why the lower posterior visual field drives optimal optomotor responses in the simZFish, explaining receptive field properties observed in real zebrafish. When challenged with novel visual stimuli, the simZFish predicted previously unknown neuronal response types, which we identified via two-photon calcium imaging in the live brains of real zebrafish and incorporated to update the simZFish neural network. In virtual rivers, the simZFish performed rheotaxis autonomously by using current-induced optic flow patterns as navigational cues, compensating for the simulated water flow. Last, experiments with a physical robot (ZBot) validated the role of embodied sensorimotor circuits in maintaining position in a real river with complex fluid dynamics and visual environments. By iterating between simulations, behavioral observations, neural imaging, and robotic testing, we demonstrate the power of integrative approaches to investigating sensorimotor processing, providing insights into embodied neural circuit functions.
Spring-loaded DNA origami arrays as energy-supplied hardware for modular nanorobots
DNA origami nanorobots allow for the rational design of nanomachines that respond to environmental stimuli with preprogrammed tasks. To date, this has mostly been achieved by constructing two-state switches that change their conformation upon activation, thereby performing an operation. Because they are intrinsically two-state systems, their applicability is often limited to a single, specific stimulus-output combination, which makes them difficult to expand further. Here, we addressed this limitation by introducing reconfigurable DNA origami arrays as networks of coupled two-state systems. This universal design strategy enables the integration of various operational units into any two-state system within the nanorobot, allowing it to process multiple stimuli, compute responses using multilevel Boolean logic, and execute a range of operations with controlled order, timing, and spatial position. We anticipate that this strategy will be instrumental in further developing DNA origami nanorobots for applications in various technological fields.
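To illustrate what it means for a network of coupled two-state units to compute multilevel Boolean logic, consider the deliberately simplified toy model below. The stimulus names, coupling rules, and outputs are invented for the example and do not describe the actual origami architecture.

    def evaluate_network(stimuli):
        # Each unit is a two-state switch; coupling means a downstream unit can
        # only switch once its upstream neighbors have switched (controlled order).
        unit_a = stimuli.get("trigger_strand", False)   # responds to one stimulus
        unit_b = stimuli.get("ph_shift", False)         # responds to another stimulus
        unit_c = unit_a and unit_b                      # first logic level: AND of two inputs
        unit_d = unit_c or stimuli.get("override_strand", False)  # second level: OR
        return {"perform_operation_1": unit_c, "perform_operation_2": unit_d}

    # Both stimuli present: the AND-gated operation is executed.
    print(evaluate_network({"trigger_strand": True, "ph_shift": True}))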
Tactile displays driven by projected light
Tactile displays that lend tangible form to digital content could transform computing interactions. However, achieving the resolution, speed, and dynamic range needed for perceptual fidelity remains challenging. We present a dynamic tactile display that directly converts projected light into visible and tactile patterns via a photomechanical surface populated with millimeter-scale optotactile pixels. The pixels transduce incident light into mechanical displacements through photostimulated thermal gas expansion, yielding millimeter-scale displacements with response times of 2 to 100 milliseconds. The use of projected light for power transmission and addressing renders these displays highly scalable. We demonstrate optically driven displays with up to 1511 addressable pixels, several times more pixels than prior tactile displays attaining comparable performance. Perceptual studies confirm that these displays can reproduce diverse spatiotemporal tactile patterns with high fidelity. This research establishes a foundation for practical and versatile high-resolution tactile displays driven by light.
Flow-driven magnetic microcatheter for superselective arterial embolization
Minimally invasive interventions performed inside brain vessels with the synergistic use of microcatheters pushed over guidewires have revolutionized the way aneurysms, strokes, arteriovenous malformations, brain tumors, and other cerebrovascular conditions are being treated. However, a substantial portion of the brain vasculature remains inaccessible because the conventional catheterization technique based on transmitting forces from the proximal to the distal end of the instruments imposes stringent constraints on their diameter and stiffness. Here, we overcame this mechanical barrier by microengineering ultraminiaturized magnetic microcatheters in the form of an inflatable flat tube, making them ultraflexible and capable of harnessing the kinetic energy of blood flow for endovascular navigation. We introduce a compact and versatile magnetic steering platform that is compatible with conventional biplane fluoroscope imaging and demonstrate safe and effortless navigation and tracking of hard-to-reach, distal, tortuous arteries that are as small as 180 micrometers in diameter with a curvature radius as small as 0.69 millimeters. Furthermore, we demonstrate the superselective infusion of contrast and embolic liquid agents, all in a porcine model. These results pave the way to reach, diagnose, and treat currently inaccessible distal arteries that may be at risk of bleeding or feeding a tumor. Our endovascular technology can also be used to selectively target tissues for drug or gene delivery from within the arteries, not only in the central and peripheral nervous systems but also in almost any other organ system, with improved accuracy, speed, and safety.
An ingestible capsule for luminance-based diagnosis of mesenteric ischemia
Acute mesenteric ischemia (AMI) results from insufficient blood flow to the intestines, leading to tissue necrosis with high morbidity and mortality. Diagnosis is often delayed because of nonspecific symptoms that mimic common gastrointestinal conditions. Current diagnostic methods, such as computed tomography and mesenteric angiography, are complex, costly, and invasive, highlighting the need for a rapid, accessible, and minimally invasive alternative. Here, we present FIREFLI (finding ischemia via reflectance of light), a bioinspired, ingestible capsule designed for luminance-based diagnosis of AMI. Upon ingestion, the device activates in the small intestine's pH environment, emitting pulses from three radially spaced white light-emitting diodes and measuring reflected light across 10 wavelengths. FIREFLI then computes a tissue luminance biomarker, which outperforms color-change biomarkers because of superior intrasubject consistency. The diagnosis is processed onboard and wirelessly transmitted to an external mobile device. In vivo studies in swine (n = 9) demonstrated a diagnostic accuracy of 90%, with a sensitivity of 98% and specificity of 85%. By providing a noninvasive, real-time diagnostic solution, FIREFLI has the potential to facilitate earlier detection and treatment of AMI, ultimately improving patient outcomes.
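The abstract does not specify how the tissue luminance biomarker is computed onboard; the fragment below is only a schematic placeholder showing a weighted combination of the 10 reflectance channels followed by a calibrated threshold, with the weights and cutoff invented for illustration.

    import numpy as np

    def luminance_biomarker(reflectance):
        # reflectance: 10 reflected-light intensities, one per measured
        # wavelength, normalized to the LED output.
        weights = np.ones(10) / 10.0          # placeholder spectral weighting
        return float(np.dot(weights, reflectance))

    def classify_ischemia(reflectance, threshold=0.5):
        # Ischemic tissue is expected to reflect light differently from
        # perfused tissue; a calibrated threshold separates the two classes.
        return luminance_biomarker(reflectance) < threshold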
Bioinspired photoresponsive soft robotic lens
Vision is a critical sensory function for humans, animals, and engineered systems, enabling environmental perception essential for imaging and autonomous operation. Although bioinspired, tunable optical systems have advanced adaptability and performance, challenges remain in achieving biocompatibility, robust yet flexible construction, and specialized multifunctionality. Here, we present a photoresponsive hydrogel soft lens (PHySL) that combines optical tunability, an all-solid configuration, and high resolution. PHySL leverages a dynamic hydrogel actuator that autonomously harnesses optical energy, enabling substantial focal tuning through all-optical control. Beyond mimicking biological vision, the system achieves advanced functionalities, including focus control, wavefront engineering, and optical steering by responding to spatiotemporal light stimuli. PHySL highlights the potential of optically powered soft robotics for applications in soft vision systems, autonomous soft robots, adaptive medical devices, and next-generation wearable systems.
The Omnia bionic leg with a semipowered knee and ankle wins the Cybathlon 2024 leg prosthesis race
Rehab Tech's Omnia prosthesis excelled at Cybathlon, showcasing advanced lower-limb prostheses and user-centered innovation.
Agile and cooperative aerial manipulation of a cable-suspended load
Quadrotors can carry slung loads to hard-to-reach locations at high speed. Given that a single quadrotor has limited payload capacity, using a team of quadrotors to collaboratively manipulate the full pose of a heavy object is a scalable and promising solution. However, existing control algorithms for multilifting systems enable only low-speed and low-acceleration operations because of the complex dynamic coupling between quadrotors and the load, limiting their use in time-critical missions such as search and rescue. In this work, we present a solution to substantially enhance the agility of cable-suspended multilifting systems. Unlike traditional cascaded solutions, we introduce a trajectory-based framework that solves the whole-body kinodynamic motion planning problem online, accounting for the dynamic coupling effects and constraints between the quadrotors and the load. The planned trajectory is provided to the quadrotors as a reference in a receding-horizon fashion and is tracked by an onboard controller that observes and compensates for the cable tension. Real-world experiments demonstrate that our framework can achieve at least eight times greater acceleration than state-of-the-art methods when following agile trajectories. Our method can even perform complex maneuvers such as flying through narrow passages at high speed. In addition, it exhibits high robustness against load uncertainties and wind disturbances and does not require adding any sensors to the load, demonstrating strong practicality.
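A conceptual sketch of the receding-horizon loop described above is given below; the planner, load-state estimator, and per-quadrotor interface are hypothetical stand-ins rather than the authors' software, and the rates and horizon length are arbitrary.

    import time

    def receding_horizon_loop(quadrotors, estimate_load_state,
                              plan_kinodynamic_trajectory,
                              horizon_s=2.0, replan_hz=10.0):
        period = 1.0 / replan_hz
        while True:
            t0 = time.time()
            load_state = estimate_load_state()
            # Whole-body plan over the horizon, accounting for the coupled
            # quadrotor-load dynamics and cable/collision constraints.
            trajectory = plan_kinodynamic_trajectory(load_state, quadrotors, horizon=horizon_s)
            for quad in quadrotors:
                tension = quad.estimate_cable_tension()               # observed onboard
                quad.track(trajectory[quad.id], feedforward=tension)  # compensate cable tension
            time.sleep(max(0.0, period - (time.time() - t0)))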
Team BeAGain's journey toward Cybathlon 2024 and holistic mobility with a robotic rehabilitation bicycle
Team BeAGain's development of the whole-body FES robotic bicycle and triumph at Cybathlon 2024 are presented.
