Workshop Schedule
Day 1: Thursday, April 2nd

07:30 - 08:30  Breakfast
08:30 - 10:00  Session 1: Grand Unifications in Locomotion and Manipulation (Speakers: Matt Mason, Russ Tedrake, Aaron Johnson, Luis Sentis; Chair: Koushil Sreenath & Alberto Rodriguez)
10:00 - 10:20  Coffee Break
10:20 - 11:50  Session 2: High Dimensional Locomotion and Manipulation (Speakers: Mystery Speaker, Dan Goldman, Pieter Abbeel, Chris Atkeson; Chair: Koushil Sreenath & Alberto Rodriguez)
11:50 - 13:10  Lunch Break
13:10 - 14:40  Session 3: Disruptive Sensor Technologies (Speakers: Gerald Loeb, Rob Howe, Lael Odhner, Katherine Kuchenbecker; Chair: Rob Platt)
14:40 - 15:00  Coffee Break
15:00 - 16:30  Session 4: Whole-Body Manipulation with Error Handling and Recovery (Speakers: Jerry Pratt, Rod Grupen, Ambarish Goswami, Katie Byl; Chair: Chris Atkeson)
16:30 - 17:30  Breakout Sessions
17:30 - 18:00
Day 2: Friday, April 3rd

07:30 - 08:30  Breakfast
08:30 - 10:00  Session 5: Contact-Rich Interactions and Contact Awareness (Speakers: Emo Todorov, Oliver Brock, Kevin Lynch, David Remy; Chair: Jeff Trinkle)
10:00 - 10:15  Coffee Break
10:15 - 11:45  Session 6: Mechanical Intelligence vs Control Authority (Speakers: Jonathan Hurst, Sangbae Kim, Aaron Dollar; Chair: Andy Ruina)
11:45 - 12:00  Break - Grab Boxed Lunch
12:00 - 13:30  Session 6 contd. (with boxed lunch): Mechanical Intelligence vs Control Authority (Speakers: Maximo Roa, Mark Cutkosky, Al Rizzi with Marc Raibert; Chair: Andy Ruina)
13:30 - 14:15  Breakout Sessions
14:15 - 14:35  What's Next?
14:35 - 15:00  Coffee Break
15:00 - 15:30
Detailed Session Descriptions
Session 1: Grand Unifications in Locomotion and Manipulation
- Why this workshop, and why now? What are the similarities and differences in tools and approaches between locomotion and manipulation? What are the grand challenges in locomotion, and in manipulation? What does a Locomotion expert think about the Manipulation community and vice versa?
Chair: Koushil Sreenath and Alberto Rodriguez
Speakers:
Matt Mason
[Slides - PDF]
Locomotion versus Manipulation
This talk is a very difficult one for me to give. Alberto and Koushil asked me to reveal my ideas about locomotion, as viewed by a manipulationist. For years I have hidden my true feelings about locomotionists, but I have decided to sacrifice my own comfort for the good of the field. So the theme of the talk is that locomotionists are knuckleheads.
Russ Tedrake
[Slides - PDF]
Manipulation is just walking (upside-down)
I will attempt to start the workshop by giving the basic formalism that is common to manipulation and locomotion: producing contact forces to move a (rigid) body from pose A to pose B. I'll argue that it is only the constraints on these forces and the relative uncertainty about the problem (e.g. kinematics and dynamics) that have caused the two fields to emphasize different formulations historically -- for instance, there are a few tricks that work for locomotion (like the zero-moment-point formulations) which aren't as natural for manipulation. With hands getting more underactuated and locomotion machines moving through more complex and uncertain terrain and using more whole-body approaches, this gap is narrowing quickly. Finally, I'll show a few approaches to humanoid planning and control that we've started applying to grasping.
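For readers less familiar with the zero-moment-point formulation mentioned above, a standard planar point-mass statement (generic background, not taken from the talk itself) is

\[ x_{\mathrm{ZMP}} \;=\; x_{\mathrm{COM}} \;-\; \frac{\ddot{x}_{\mathrm{COM}}\, z_{\mathrm{COM}}}{\ddot{z}_{\mathrm{COM}} + g}, \]

with the dynamic-balance condition that this point remain inside the support polygon of the feet. The analogous force constraint in grasping is that fingertip contact wrenches lie within their friction cones, which is one way to see the shared structure between the two problems.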
Aaron Johnson
[Slides - PDF]
(Self-)Manipulation Challenges
Most if not all of the challenges in getting a manipulation system to do something useful in the real world are, as has often been noted, the same challenges that exist for a locomotion system. Indeed, physics does not care what we call such problems. Specifically, both types of systems must work despite ever-present uncertainty, deal with cluttered environments, run at relevant operational speeds, and, of course, do so within their finite power limits. Inherent to both manipulation and locomotion systems is an underlying simplicial topology of contacts, and it remains an open question as to when and whether this structure is useful in addressing any of the challenges to reliable operation of these systems. To make this unified view of manipulation and locomotion exact, "self-manipulation" consists of a set of modeling decisions that permit a body-centric choice of coordinates and formally generate correct equations of motion across all contact modes. By staying as close as possible to manipulation, self-manipulation models can directly use the tools and insights developed in that more mature field and make clear the subtle but important differences between these two classes of systems.
Luis Sentis
[Slides - PDF]
Experiences on maturing Whole-Body Operational Space Control and implementing it into agile bipedal systems
With a background in manipulation frameworks, i.e. Whole-Body Operational Space Control, I will explain my recent ventures in extending and implementing this model-based feedback controller in manipulation and locomotion systems. In particular, I will discuss the main differences between implementing the algorithm on agile legged robots, with their fast dynamics and quick contact changes, and our experience with manipulation systems. As whole-body control frameworks become increasingly complex, the latencies incurred in centralized controllers take a toll on real-time feedback performance. I will discuss the impact of latencies on whole-body controllers and motivate the potential benefits of distributed whole-body control architectures. Finally, I will discuss our software integration efforts with a focus on usability and compatibility with modern libraries and the ROS middleware.
Session 2: High Dimensional Locomotion and Manipulation
- State-of-the-art methods for locomotion and manipulation can be pushed to deal with systems of up to 50 DOF, which is fairly impressive compared to what we could achieve just a few years ago. How can we push this dimensionality even further, perhaps all the way to infinity? How do we locomote with soft structures? How do we manipulate deformable objects? How do we plan and control continuum robots? Ex.: manipulating ropes, clothes, and fluids; locomotion modes for soft robots such as Baymax.
Chair: Koushil Sreenath & Alberto Rodriguez
Speakers:
Dan Goldman
[Slides - PDF]
Locomotion and manipulation on, in, and of deformable granular media
My group is interested in problems involving organisms and physical models of organisms (robots) interacting with terrestrial substrates like sand and soil; such materials can flow and solidify during locomotion or manipulation. Here (following the questions posed by the organizers) I will discuss three different studies:

1. “How do we plan and control continuum robots?” To address this, I will discuss our recent work [Marvi et al, Science 2014; Astley et al, PNAS, 2015] discovering principles of effective locomotion of sidewinding snakes on dry granular media. I will show how (in collaboration with Prof. Howie Choset’s group at Carnegie Mellon) we have used a multi-module robot to reveal how the snakes modulate orthogonal body waves (control templates) to manipulate the substrate (e.g. remain below the yield stress) to climb sandy slopes and perform turning maneuvers.

2. “How can we push this dimensionality [of current systems which can have up to 50 DOF] even further, perhaps all the way to infinity?” To address this, I will discuss our work (again in collaboration with Choset’s group) in applying geometric mechanics methods to model swimming locomotion in granular media. I will first discuss how predictions from theory agree well with laboratory experimental tests, and allow prediction of optimal gaits of few DOF systems (e.g. a 3-link Purcell sand-swimming robot) [Hatton et al, PRL 2013]. I will briefly discuss some new results from Choset’s group [Gong et al, in prep, 2015] which allow the geometric techniques (and accompanying visualization tools) to be applied to higher DOF swimmers through an optimal bases scheme.

3. “How do we manipulate deformable objects?” To address this, I will discuss our recent work on collective excavation of subsurface structures by fire ants [Gravish et al, Interface 2012; Gravish et al, PNAS 2013; Monaenkova, J. Exp. Biol. 2015]. These few-mm-long animals dig meter-deep, topologically complex networks of tunnels through the repeated process of excavation of soil “pellets”. We have discovered that these seemingly simple invertebrates possess sophisticated manipulation behaviors (involving use of jaws, limbs and even antennae) which allow them to create and shape similarly sized pellets (and thus nests) in a diversity of soils, from fine clay to coarse sand. We posit that this pellet size balances the individual’s load-carrying ability with the need to carry this pellet through confined, crowded tunnels.

I will conclude the talk by briefly mentioning some ideas on what we are calling robophysics [Aguilar et al, in prep, Rep. Prog. Physics, 2015], arguing that robotics can benefit from systematic experimental tests of physical models to discover fundamental principles of locomotion (and manipulation).
Pieter Abbeel
[Slides - PDF]
Deep Reinforcement Learning for Locomotion, Manipulation (and Other Visuo-Motor Tasks)
Reinforcement learning addresses the problem of learning controllers and is in principle applicable independent of the application area, whether it’s locomotion, manipulation, or even flight or swimming. However, in practice reinforcement learning successes have heavily relied on domain-specific expert engineering of control architectures and/or models. In this talk I will advocate that, similar to how deep multi-layer architectures have enabled significant advances over prior approaches in supervised learning tasks such as visual recognition and speech recognition, deep reinforcement learning has the potential to enable significant advances in robotics across locomotion and manipulation, as well as flight and swimming. I will start with a quick review of deep learning, describe the DeepMind DQN results on learning to play Atari games from raw pixels (without access to the underlying game state), describe further advances since, and present promising results on simulated and real robotic systems in locomotion and in manipulation. I will conclude by describing limitations and open problems.
Chris Atkeson
[Slides - PDF]
Low DOF Control of High DOF Systems
We use low DOF control to control high DOF systems like the Atlas and Sarcos humanoids. Simplified models of these robots are used to plan contact forces and center of mass trajectories. Inverse dynamics is used to convert the low DOF plan into a high DOF plan. We believe that the low DOF plan is the key to good performance, not the inverse dynamics. We are also looking at the control of very high DOF systems such as liquids and granular materials. Humans seem to use low DOF control to manipulate these materials. We will describe our initial efforts in getting robots to imitate human strategies. Finally, we will describe how animators use low DOF models to specify the motion of high DOF systems, such as the inflatable robot Baymax in the movie Big Hero 6.
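As a generic illustration of this kind of pipeline (standard reduced-model control, not a description of this group's specific implementation): a linear inverted pendulum model plans the center-of-mass motion, and whole-body inverse dynamics lifts the low DOF plan to joint torques,

\[ \ddot{x}_{\mathrm{COM}} = \frac{g}{z_0}\left(x_{\mathrm{COM}} - p_x\right), \qquad \tau = M(q)\,\ddot{q}_{\mathrm{des}} + h(q,\dot{q}) - J_c(q)^{\top} f_c , \]

where \(p_x\) is the planned center of pressure, \(z_0\) the (assumed constant) COM height, \(M\) the joint-space inertia matrix, \(h\) the bias forces, and \(J_c^{\top} f_c\) the planned contact forces mapped to joint space (floating-base underactuation omitted for brevity).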
Session 3: Disruptive Sensor Technologies
- The arrival of widely available lidar and RGB-D sensors has changed visual perception. These new sensors provide quantitatively and qualitatively better data than previous ones, which has dramatically changed the capabilities and complexity of perception systems. Could a similar breakthrough in touch/contact sensing make contact understanding dramatically easier as well? Do we have sufficient skin sensing technology already? If we had access to accurate, reliable, real-time data, are we ready to close the loop with it in manipulation and locomotion?
Chair: Rob Platt
Speakers:
Gerald Loeb
Biomimetic Machine Touch for Dexterous Robotic and Prosthetic Hands
Machine vision has been applied successfully to industrial robots. Commercially available CCD video cameras capture images of objects being manipulated and computer algorithms extract information to make decisions about their handling. Is this a model for haptically enabled robots? No. Haptics is essentially collision management. No matter what tactile sensing modality is employed, the events that will be sensed depend on the mechanical properties of the appendage that contains the sensors and on the active movement that causes the collisions with an object. Humanlike dexterity is often seen as a desirable and challenging goal for haptic robots, so it seems reasonable to understand and perhaps to imitate those properties and movements.

Key mechanical properties of glabrous fingertips include highly elastic and compliant skin that is deformed by mechanical interactions with objects. A flat region on the underlying bone called an apical tuft provides the equivalent of a vernier amplifier for tiny tilt angles. Fingerprint ridges convert simple sliding movements into coherent amplification of induced vibrations. Heating the fingertip above ambient results in thermal gradients indicative of the material properties of objects. All other surfaces of human limbs are covered by hairy skin, which provides highly sensitive contact detection to trigger evasive action and a tough surface that can absorb kinetic energy until such action takes effect.

Dexterity also depends as much on speedy responses as on sophisticated signal processing, so humans rely first on simple, short-latency reflexes mediated by the spinal cord rather than conscious perception by the distant brain. Conscious perception in the brain requires an iterative series of decisions about what exploratory movement will most likely resolve whatever uncertainty the human operator has about an object, based on prior experience and unfolding events. We have built robotic machines with tactile sensors, reflexive feedback and exploratory algorithms that mimic most of these human strategies and thereby achieve at least a modicum of humanlike haptic function. Much remains to be done, but at least we are finally on the right track.
Rob Howe
Smartphones let my robot feel
Manipulation and locomotion require good contact sensing, and the robotics literature is full of great sensor designs. Nonetheless, experimental investigation of the role of sensing in integrated robot systems is almost nonexistent. This is because the cost of sensor fabrication and integration is prohibitive. Fortunately, this is changing due to the availability of integrated sensor systems for consumer products, particularly smartphones. These devices include transducers, analog-to-digital converters, and microcontrollers, packaged in standard miniature integrated circuit cases, and costing on the order of US$1 each. They provide high quality signals over industry-standard buses, enabling simple integration with robots. We describe the development of one class of such systems, namely tactile sensors based on MEMS barometers using the I2C bus. We have integrated these devices into robot fingers to detect contact events and pressure distributions, and into robot feet to measure ground contact and center of pressure location. The ease of application demonstrated here shows the potential for ubiquitous robotic use of sensors for acceleration, proximity, temperature, distance, magnetic fields, and many other physical parameters.
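As a rough illustration of how simply such sensors integrate, a handful of barometer-based taxels can be polled over I2C and thresholded for contact in a few lines of Python. This is a minimal sketch with hypothetical addresses, register layout, and threshold; it is not the authors' actual firmware.

```python
# Minimal sketch: poll two barometer-based taxels over I2C and threshold for contact.
# The I2C addresses, register layout, and threshold below are hypothetical placeholders.
import time
from smbus2 import SMBus

TAXEL_ADDRESSES = [0x76, 0x77]   # hypothetical 7-bit I2C addresses of two taxels
PRESSURE_REGISTER = 0x00         # hypothetical register holding a 16-bit raw pressure value
CONTACT_THRESHOLD = 150          # raw counts above the resting baseline treated as contact

def read_raw_pressure(bus: SMBus, address: int) -> int:
    """Read a 16-bit big-endian raw pressure value from one taxel."""
    high, low = bus.read_i2c_block_data(address, PRESSURE_REGISTER, 2)
    return (high << 8) | low

with SMBus(1) as bus:            # bus 1 is typical on small embedded Linux boards
    # Record a no-contact baseline for each taxel, then poll at roughly 100 Hz.
    baselines = {addr: read_raw_pressure(bus, addr) for addr in TAXEL_ADDRESSES}
    while True:
        for addr in TAXEL_ADDRESSES:
            delta = read_raw_pressure(bus, addr) - baselines[addr]
            if delta > CONTACT_THRESHOLD:
                print(f"contact on taxel 0x{addr:02x} (delta = {delta} counts)")
        time.sleep(0.01)
```

The same pattern extends to the center-of-pressure measurement mentioned above by weighting each taxel's reading by its known position on the finger or foot.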
Lael Odhner
Giving your colleagues a hand: how can the robotics community tackle problems of adoption?
On the most basic level, a disruptive technology is one that people use. Many amazing technologies for tactile sensing and manipulation are not adopted by the research community at large. The barriers to adoption in these cases are sometimes performance- or cost-related, but are just as often rooted in practical support, sales and supply chain issues. In this talk, we will explore various models for encouraging broad adoption of new technologies for manipulation, drawing on recent examples.
Katherine Kuchenbecker
[Slides - PDF]
Key Barriers to Haptic Intelligence in Robotics
When you step on or manipulate objects in your surroundings, you can discern each item’s physical properties from the rich array of haptic cues you experience, including both the tactile sensations arising in your skin and the kinesthetic cues originating in your muscles and joints. Anticipating surface properties such as hardness and friction enables you to plan actions that are likely to succeed, while monitoring haptic sensations lets you control the resulting interaction and quickly correct any problems that arise. In contrast, autonomous robots rarely take advantage of the sense of touch; both robotic locomotion and robotic manipulation systems would benefit from improvements in haptic intelligence. Why, then, don't all modern robots incorporate rich haptic sensing? This talk will use examples from my lab to illustrate key barriers facing the robotics research community in this domain.

First, we need to think beyond the force sensor to include tactile cues. Almost everyone trying to add haptic feedback to robotic surgery has focused on force sensing. My team showed that it is far more practical and surprisingly useful to measure and feed back high-frequency instrument vibrations. We also found that the magnitude of instrument vibrations inversely correlates with surgical skill.

Second, we need to dig deeper into the haptic signals that we do acquire. Tactile sensors are often simply thresholded to yield a binary contact indicator. My lab has developed more sophisticated tactile signal processing methods that enable the PR2 to gently pick up, carry, and set down unknown objects, as well as exchange high fives and fist bumps with humans. We have also characterized the vibrotactile signals (ego-vibrations) caused by actuation of the PR2's gripper, to improve contact event detection.

Third, we need to enable robots to feel with their eyes, as humans do. Non-contact machine perception currently focuses on identifying shapes and recognizing object categories. In collaboration with Trevor Darrell at UC Berkeley, my lab is currently collecting a large corpus of matched visual and haptic surface interaction data. We plan to provide open-source software that will enable robots to anticipate the physical properties of ground surfaces and objects through vision alone.
Session 4: Whole-Body Manipulation with Error Handling and Recovery
- Within human behavior, errors are more the norm than the exception. Humans are also spectacularly graceful at detecting and recovering from them. Even though errors might have different manifestations in locomotion and manipulation (slipping, tripping, stumbling, fumbling, ...), the underlying cause is often unexpected contact behavior: losing contact, making unexpected contact, or frictional uncertainty. Robustness against contact uncertainty is especially relevant for whole-body manipulation, where the notions of manipulation, locomotion, and body posture are so intertwined that unexpected contact behavior is synonymous with falling down. How do robots become proficient at recovering from contact uncertainty?
Chair: Chris Atkeson
Speakers:
Jerry Pratt
Whole Body Humanoid Control and Push Recovery, Some Observations, and Challenges
Whole body humanoid control techniques allow for simultaneously balancing and manipulating objects and for distributing contact forces over multiple appendages. Push recovery techniques attempt to regain balance after a disturbance by controlling momentum to fight the disturbance or by modifying where a footstep is taken. We will describe an algorithm which combines momentum-based whole-body control and push recovery techniques. We will present results from Atlas, which we are using for the DARPA Robotics Challenge, and from Valkyrie, which we are using for a National Robotics Initiative project. In addition to describing our current research, we will also discuss some general observations about humanoid robots and some of the current and upcoming challenges.
Rod Grupen
What Divide? Locomotion and Manipulation in a Uniform Motor Control Framework
I will try to challenge the premise of the workshop, namely that there appear to be fundamental differences between locomotion and manipulation. Instead I will propose that both enterprises can be viewed uniformly as the search for a motor control sequence that establishes belief that the robot satisfies a functional specification of a task---for instance, that a misplaced object ends up in my hand. A control framework is proposed that provides a combinatorial basis for actions and states given a collection of sensor and motor resources and, thus, supports learning algorithms that depend on exploration. Hierarchies of skills can be learned efficiently in this framework. What’s more, the skills themselves blur superficial distinctions between locomotion and manipulation and lead to representations that carve up the world into “objects.” I will present some simple demonstrations of this perspective using our uBot mobile manipulator.
Ambarish Goswami
Humanoid Robot Fall Control
Robots must be completely safe if they are to operate in human environments. Autonomous walking robots have a unique and serious safety issue resulting from loss of balance and falling. Self-damage and unintentional human injury from a fall are major reasons that humanoid robots are not allowed to move freely in human environments. Balance is one of the oldest topics in humanoid robotics and continues to appeal to researchers of the present day. However, the keen interest in balance research makes us overlook the consequences of a balance failure. Although a fall appears to be a rare event in the life of a humanoid robot, its occurrence is virtually unavoidable, and its consequences can be disastrous. A falling robot is an underactuated system that rapidly gains speed under gravity. It is a challenging whole-body motion problem in which the task is the rather depressing one of a "better" fall, one associated with less injury and damage. The time to act is very short. In this respect, we will describe our work on a humanoid robot fall strategy that tries to modify the robot's fall direction in order to avoid hitting a person or an object in the vicinity. We will follow up with some broader discussion points:
a) What can humanoid robots learn from human falls?
b) Can humanoid robots learn to fall “better” than humans?
c) Do we know clearly enough why balance is a hard problem so that we can explain it to someone outside the field?
d) What is the best way to study rare catastrophic events (such as humanoid falls) that we might never want to experiment with?
e) What can we learn from simulation studies of dynamic systems that are inherently very sensitive to small perturbations?
Katie Byl
[Slides - PDF]
Trade-offs in Limbed Mobility
In a world with ample variability and imperfect sensing capabilities, it’s hard to get robust disturbance rejection – and still be agile (or dexterous). One approach for robustness in mobility is to design a robot that doesn’t mind falling down. Another is to design a robot with a large, table-like base of support. JPL has gone down the latter route in designing RoboSimian for the DARPA Robotics Challenge. Arguably, this also places it as a tortoise among hares (in competing with humanoids) when it comes to mobility. We’ll discuss natural trade-offs between going fast and moving reliably in variable environments, and, arguably of highest importance, issues of benchmarking and metrics to quantify both robustness and agility.
Session 5: Contact-Rich Interactions and Contact Awareness
- There is an intellectual gap between our current planning and control capabilities and what we understand or see as optimal behavior. For instance, the fastest way through a narrow passage is based on simple bump-n-go strategies; however, we do not have efficient planning or control algorithms that can exploit the richness of available contact interactions, especially in the context of dynamic interactions with an uncertain environment. Current approaches to dynamic planning and control through contact rely on structured and precise geometric models for contact locations. For example, contact is typically assumed only at the ends of the limbs (fingertips and feet) and with a known, fixed environment. How do we transition to seeing contact as a feature rather than an annoyance?
- Contact dynamics impose important priors when trying to make sense out of contact. An object resting under gravity will not jitter. Two objects in contact will not penetrate each other. An object will not gain energy unless energy is supplied. Standard Gaussian-based observers, such as Kalman-type filters, however, are not capable of accurately reasoning about events that are inherently non-Gaussian. As a result, our go-to estimators and trackers often say that objects jitter, that they penetrate each other, and that they suddenly gain energy. Is there an efficient and more accurate way to make use of contact information (such as the non-penetration prior sketched below) in support of estimation and identification?
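One common way to state the non-penetration prior mentioned above (standard contact-mechanics notation, not drawn from the session itself) is as a complementarity condition:

\[ 0 \;\le\; \phi(q) \;\perp\; \lambda_n \;\ge\; 0, \]

where \(\phi(q)\) is the signed separation distance between two bodies and \(\lambda_n\) is the normal contact force: both quantities are non-negative and at most one of them is nonzero at any instant. A prior of this form is non-smooth and cannot be captured by additive Gaussian noise, which is one way to see why Kalman-type filters struggle with contact events.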
Chair: Jeff Trinkle
Speakers:
Emo Todorov
[Slides - PDF]
Model-based robotics, without the simplifications
Our plan for solving robotics is to do near-optimal system identification, state estimation and feedback control, all with respect to the same physics model. This model is as detailed as we can make it, in contrast with inverted pendulums and other simplifications which have worked for isolated behaviors but do not seem to offer an upgrade path. Computers have gotten fast enough to optimize numerically through the full physics. A major challenge is optimization through contacts – which make the cost landscapes difficult to navigate. To address this challenge we have developed new optimization algorithms, contact models and a custom physics simulator. They have enabled us to plan complex trajectories for dexterous manipulation, locomotion and other full-body movements within the same framework. For online execution we have relied on model-predictive control. More recently we have been able to train large neural network controllers, by combining deep-learning techniques with trajectory optimization to provide the training data. Overall, we are approaching the point where we can solve pretty much any control problem we want to solve in simulation. Achieving real-time performance can still be an issue in some cases, but Moore’s law is really good at resolving such issues. Thus we are now turning our attention to system identification and state estimation in the presence of contacts, while leveraging the tools developed in the control domain.
Oliver Brock
Robotics, with the *right* simplifications
There is general agreement on what the goals are---both in locomotion and in manipulation. But since we don’t even have a satisfactory solution for either yet, any discussion of what the “right” solution might be must be pure speculation. So I will speculate. Do the locomotion and manipulation problems share enough characteristics for it to make sense to use the same solution? Of course, it is possible to model all of physics---and that would certainly cover nearly everything. But does it make sense to go the route of a universal solution? Or will it be easier and better to come up with solutions that exploit the characteristics and idiosyncrasies of the problem? I will argue that we should do the latter. And I will speculate about what these idiosyncrasies might be and how they can be exploited. I will be very much in favor of contact-rich interactions; I think they are key to any solution. And I will question what awareness might mean in the term “contact awareness” (as in the session title).
Kevin Lynch
Locomotion and Manipulation as Uncertain Hybrid Mechanical Systems
Much of my lab's past work on manipulation has focused on relatively simple systems: robots with only a few degrees of freedom, manipulating objects in controlled environments, by pushing, throwing, sliding, batting, rolling, etc. One reason for the simplicity is to explore the limits of what's possible with simple systems with a good mechanical model, and to develop well-grounded theory for controllability, motion planning, and feedback control. Our more recent work in locomotion has followed a similar philosophy. Recently, however, we finished construction of the ERIN manipulation system, consisting of a 7-dof robot arm, a 16-dof robot hand, four tactile fingertips, and a high-speed 3D vision system. Will the mechanics-based approach to motion planning and control provide a path forward, or are the uncertainties in a complex system like this just too large for it to be useful? Is it worth imagining a future where models, sensors, and estimators exist that give highly precise estimates of the states of manipulated objects, or of the states of dynamically locomoting robots, as well as their current and expected future contact states? In this talk I will raise more questions than I answer regarding the prospects of viewing locomotion and manipulation as uncertain hybrid mechanical systems.
David Remy
[Slides - PDF]
Order Matters!? – The Choice of Gait and Contact Sequence
This talk seeks to motivate the importance of an appropriate contact sequence from the perspective of locomotion in biology and robotics. Using simple models of locomotion, we show that the choice of gait, and thus the choice of a footfall pattern, can have a big impact on the performance of a legged system. Planning and optimization tools that do not require an a priori definition of the footfall sequence can take advantage of this variability and lead to gaits that better exploit mechanical dynamics during locomotion. Similar tools are equally useful in the planning of manipulation tasks. To contrast this similarity with a clear difference between manipulation and locomotion, the talk also uses the issue of gait to highlight the importance of passive energy storage and return in locomotion.
Session 6: Mechanical Intelligence vs Control Authority
- It is a recurrent trend in design to seek the exploitation of mechanical intelligence in support of robustness and efficiency in interacting with an uncertain world. Compliance, underactuation, or synergies in hand design, or passive dynamic machines in locomotion, are examples of design solutions that reduce the need for control. Reducing the need for control, however, does not imply that control becomes any easier. On the contrary, since the dynamics of an “intelligent” mechanism are more complex, control often becomes more difficult. A different approach to design is full actuation. Fully actuated hands or legs give greater control authority. However, if we were to have full control authority, do we know what to do with it? Do we have control algorithms capable of exploiting full control beyond reduced models such as ZMP in locomotion or eigengrasps in manipulation? Is there a formal way to approach this trade-off?
- Feet of locomoting machines aim to be resilient, strong, and yield simple and predictable contact interactions, often at the expense of dexterity and adaptability (example: pointy feet). On the other hand, hands of manipulators aim to be graceful, dexterous, and functional, often at the expense of resilience and strength (example: breakaway hand). In an ideal world, where robots compete in American Ninja Warrior, we need end-effectors that combine the strength necessary to provide good handholds/footholds with a moderate degree of dexterity. How do we approach that trade-off? What would be a good set of goal tasks?
Chair: Andy Ruina
Speakers:
Jonathan Hurst
Between Passive and Active: The Balancing Act of Designing Behaviors
Building robots for legged locomotion requires a balance between active control and mechanical intelligence, also called passive dynamics. In this talk, we present progress and lessons learned from our experiences designing, building, and controlling the bipedal robot ATRIAS. This robot is principally designed to embody a desired set of passive dynamics, while ceding significant control authority (ATRIAS has twelve degrees of freedom, but only six motors). Specifically, ATRIAS’ mechanics passively exhibit spring-mass dynamics, which we believe are a good first cut at rendering the "Most Significant Bits" of the dynamics of animal walking and running. Our preliminary results show that ATRIAS, when walking in 3D, reproduces the ground reaction forces of human walking, which is a predicted feature of spring-mass locomotion. We conclude by discussing the relative merits of mechanical intelligence and control authority in ATRIAS' behaviors, legged locomotion in general, and the broader requirements of physical-interaction tasks.
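For reference, the spring-mass template mentioned above is usually written as a point mass on a massless, linearly elastic leg (the SLIP model; a generic statement, not the ATRIAS-specific design equations). During stance, with the foot fixed at \(\mathbf{r}_f\),

\[ m\,\ddot{\mathbf{r}} \;=\; k\left(l_0 - \lVert \mathbf{r} - \mathbf{r}_f \rVert\right)\frac{\mathbf{r} - \mathbf{r}_f}{\lVert \mathbf{r} - \mathbf{r}_f \rVert} \;+\; m\,\mathbf{g}, \]

where \(m\) is the body mass, \(k\) the leg stiffness, and \(l_0\) the leg rest length; during flight the mass follows a ballistic trajectory.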
Sangbae Kim
Proprioceptive impulse control: toward robust physical interaction
Disaster response often involves exploring and performing physical work in dangerous environments, while the design of most robots, stemming from manufacturing robots, focuses on accurate and rapid position tracking. Many researchers have been introducing new design and control paradigms to achieve stable and robust dynamic interactions with the environment. This talk will describe the characteristics of a new paradigm of robotic limb design called proprioceptive force control actuators and the implementations of impulse planning toward enhanced dynamic interaction with environments. This approach is implemented in the MIT Cheetah, which is capable of running up to a speed of 13 mph at animal-level locomotion efficiency (total COT of 0.45) and of autonomously jumping over a 40 cm-high obstacle.
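For context on the efficiency figure quoted above, the total cost of transport is the standard dimensionless measure

\[ \mathrm{COT} \;=\; \frac{P}{m\,g\,v}, \]

where \(P\) is the total power drawn, \(m\) the robot mass, \(g\) gravitational acceleration, and \(v\) the forward speed; 13 mph corresponds to roughly 5.8 m/s. The 0.45 figure is what the abstract refers to as animal-level locomotion efficiency.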
Aaron Dollar
Contact and Constraints in Hands and Legged Robots
It is well-accepted that, especially in the context of multi-legged robots and fingertip-based grasping and manipulation, the mechanics of manipulation and locomotion look a lot alike. In this talk, I will discuss some of the challenges associated with reliably making contact between the robot and the external environment, especially in the presence of uncertainty. I will argue that, more often than not, robotic systems end up being over-constrained at contact and something either “breaks”, or complex and error-prone redundant control schemes must be utilized to mitigate the situation. Alternatively, I will discuss how addressing the mechanical design of hands and legged robots, especially through the implementation of underactuated mechanisms, can help to passively address the challenges of contact and the constraints of closed kinematic chains.
Maximo Roa
Compliant control for grasping and balancing
Joint torque sensing allows the implementation of sensitive compliance and impedance controllers. Passivity-based impedance controllers have traditionally been applied to manipulation tasks. Experiments on grasping and reach-and-grasp tasks will be presented in this talk, showing how the controller can cope with positional uncertainty of the object. The same framework is applied to obtain a whole-body posture controller for the humanoid robot TORO. Moreover, this controller is extended to the multi-contact scenario, thus obtaining robust balancing behaviors even with uncertainty in the supporting surface.
Mark Cutkosky
Embracing the Environment with Hands and Feet
The “divide” between locomotion and manipulation is perhaps an artifact of a natural progression from operations in which interactions with the environment are few and carefully scripted, to tasks in which interactions are ubiquitous and exploited. Early robotic applications typically minimize interactions: wheeled robots patrol corridors using sonar and vision; drones create flight paths that avoid obstacles; arms plan trajectories to acquire objects without touching barriers along the way. Later applications embrace interactions: feet and other appendages push into the ground, sliding as they go; hands grasp and manipulate objects with rolling and sliding; drones perch on walls, ceilings and fixtures in the environment. All of these tasks require modeling, sensing and controlling physical interactions with the environment. Typically the interactions are complex; however, it is essential to model them because the associated contact forces often dominate the overall dynamics. In this context, there is little difference between climbing a rocky cliff and manipulating a rock. Among the recurrent lessons from biology is that animals are very effective at exploiting such interactions with the environment. To this end they use robust, compliant structures and end-effectors that tolerate misalignment, impacts and abrasion; they modify the attributes (e.g. friction, adhesion, fluidization) of the end-effector/environment interface directly; and they store and release energy. As robots venture out of structured environments and into the world at large they increasingly need to adopt similar strategies.
Al Rizzi and Marc Raibert
[Slides - PDF]
Manipulation vs Locomotion