A PHYSIOLOGIST'S PERSPECTIVE ON ROBOTIC EXOSKELETONS FOR HUMAN LOCOMOTION
Technological advances in robotic hardware and software have enabled powered exoskeletons to move from science fiction to the real world. The objective of this article is to emphasize two main points for future research. First, the design of future devices could be improved by exploiting biomechanical principles of animal locomotion. Two goals in exoskeleton research could particularly benefit from an additional physiological perspective: 1) reduction in the metabolic energy expenditure of the user while wearing the device, and 2) minimization of the power requirements for actuating the exoskeleton. Second, a reciprocal potential exists for robotic exoskeletons to advance our understanding of human locomotor physiology. Experimental data from humans walking and running with robotic exoskeletons could provide important insight into the metabolic cost of locomotion that is impossible to gain with other methods. Given the mutual benefits of collaboration, it is imperative that engineers and physiologists work together in future studies on robotic exoskeletons for human locomotion.
ACTIVE SEGMENTATION
The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can be either an object or just a part of one. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour, a connected set of boundary edge fragments in the edge map of the scene, around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion in a cue-independent manner. Semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment, and the capability of automatically segmenting objects in their visual field can substantially advance their visual processing. Our approach differs from existing work, which attempts to segment the whole scene at once into many regions; we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from well-known databases demonstrate the promise of the approach.
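To make the core idea concrete, the following is a minimal sketch, not the published algorithm, of one way an enclosing contour around a fixation could be extracted. It assumes a simple gradient-magnitude edge map as a stand-in for the fused boundary cues, and searches for a strong, smooth boundary in polar coordinates around the fixation with a small dynamic program; the function names, the smoothness weight, and the grid sizes are illustrative assumptions.

```python
import numpy as np

def edge_map(img):
    """Stand-in boundary map: gradient magnitude of a grayscale image,
    normalized to [0, 1]. A real system would fuse color, texture, stereo,
    and motion cues here (an assumption of this sketch)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-9)

def segment_at_fixation(img, fixation, n_theta=180, n_r=100, smooth=0.05):
    """Return an enclosing contour around fixation = (row, col).

    Idea: resample the edge map on a polar grid centered at the fixation,
    then, for every angle, pick the radius with the strongest boundary
    evidence while penalizing large radius jumps between neighboring angles
    (a simple dynamic program). Assumes the fixation is not at the image
    border; a full implementation would also enforce contour closure.
    """
    E = edge_map(img)
    h, w = E.shape
    cy, cx = fixation
    r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(1.0, r_max, n_r)

    # Nearest-neighbor polar resampling of the edge map around the fixation.
    rows = np.clip(np.round(cy + radii[None, :] * np.sin(thetas[:, None])).astype(int), 0, h - 1)
    cols = np.clip(np.round(cx + radii[None, :] * np.cos(thetas[:, None])).astype(int), 0, w - 1)
    polar = E[rows, cols]                                     # (n_theta, n_r)

    # Dynamic program over angles: cost = -edge strength + radius-jump penalty.
    r_idx = np.arange(n_r)
    jump = smooth * np.abs(r_idx[:, None] - r_idx[None, :])   # (current, previous)
    cost = -polar.copy()
    back = np.zeros((n_theta, n_r), dtype=int)
    for t in range(1, n_theta):
        total = cost[t - 1][None, :] + jump
        back[t] = np.argmin(total, axis=1)
        cost[t] = -polar[t] + total[r_idx, back[t]]

    # Backtrack the best radius at every angle, then map to image coordinates.
    best = int(np.argmin(cost[-1]))
    picks = [best]
    for t in range(n_theta - 1, 0, -1):
        best = int(back[t, best])
        picks.append(best)
    picks = np.array(picks[::-1])
    return np.stack([cy + radii[picks] * np.sin(thetas),
                     cx + radii[picks] * np.cos(thetas)], axis=1)
```

Replacing the gradient-magnitude map with a boundary-probability map that fuses monocular, stereo, and motion cues would leave the polar search unchanged, which is what makes the framework cue independent.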
RECOGNIZING BEHAVIOR IN HAND-EYE COORDINATION PATTERNS
Modeling human behavior is important for the design of robots as well as human-computer interfaces that use humanoid avatars. Constructive models have been built, but they have not captured all of the detailed structure of human behavior, such as the moment-to-moment deployment and coordination of hand, head, and eye gaze used in complex tasks. We show how such data, collected from human subjects performing a task, can be used to program a dynamic Bayes network (DBN), which in turn can be used to recognize new performance instances. As a specific demonstration, we show that the steps in a complex activity such as sandwich making can be recognized by a DBN in real time.
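As a rough illustration of how such step recognition might work, the sketch below implements the simplest possible DBN: a hidden task-step variable with two conditionally independent observation streams (gaze target and hand action), filtered online frame by frame. The step labels, observation vocabularies, and probability tables are invented placeholders, not the models learned in the study; in practice the observation models would be estimated from recorded human performances.

```python
import numpy as np

# Illustrative (assumed) task steps and discretized observations.
STEPS = ["get_bread", "spread", "add_filling", "assemble", "done"]
GAZE  = ["bread", "knife", "jar", "filling", "plate"]         # fixated object
HAND  = ["reach", "grasp", "spread_motion", "place", "idle"]  # hand action

N_S, N_G, N_H = len(STEPS), len(GAZE), len(HAND)
rng = np.random.default_rng(0)

# Transition model P(step_t | step_{t-1}): mostly stay, occasionally advance.
A = np.eye(N_S) * 0.9
for i in range(N_S - 1):
    A[i, i + 1] = 0.1
A[-1, -1] = 1.0

# Observation models P(gaze | step) and P(hand | step): random placeholders
# standing in for tables learned from human hand-eye coordination data.
B_gaze = rng.dirichlet(np.ones(N_G), size=N_S)
B_hand = rng.dirichlet(np.ones(N_H), size=N_S)

def recognize(gaze_seq, hand_seq):
    """Online forward filtering: return the most probable step at each frame,
    assuming gaze and hand observations are conditionally independent given
    the current step (the simplest DBN factorization)."""
    belief = np.ones(N_S) / N_S
    decoded = []
    for g, h in zip(gaze_seq, hand_seq):
        belief = A.T @ belief                      # predict the next step
        belief *= B_gaze[:, g] * B_hand[:, h]      # update with both cues
        belief /= belief.sum()
        decoded.append(STEPS[int(np.argmax(belief))])
    return decoded

# Example: decode a short synthetic observation sequence in real time.
print(recognize(gaze_seq=[0, 0, 1, 2, 3, 4], hand_seq=[0, 1, 2, 2, 3, 4]))
```

Because filtering only carries a belief vector forward, each new frame is processed in constant time, which is what allows recognition to keep pace with the ongoing activity.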
