José Santos-Victor, Instituto Superior Técnico, Portugal
          Title: Bioinspired Robotics and Vision with Humanoid Robots

Alícia Casals, Institute for Bioengineering of Catalonia (IBEC) and Universitat Politècnica de Catalunya (UPC), Spain
          Title: Human-Robot Cooperation Techniques in Surgery

Bradley Nelson, Robotics and Intelligent Systems at ETH-Zürich, Switzerland
          Title: Making Microrobots Move

Wisama Khalil, Ecole Centrale de Nantes, IRCCyN, France
          Title: Dynamic Modeling of Robots Using Recursive Newton-Euler Techniques

Oleg Gusikhin, Ford Research & Adv. Engineering, U.S.A.
          Title: Emotive Driver Advisory System

John Hollerbach, University of Utah, U.S.A.
          Title: Fingertip Force Measurement by Imaging the Fingernail

Keynote Lecture 1
Bioinspired Robotics and Vision with Humanoid Robots
José Santos-Victor
Instituto Superior Técnico

Brief Bio
José Santos-Victor received the PhD degree in Electrical and Computer Engineering in 1995 from Instituto Superior Técnico (IST - Lisbon, Portugal), in the area of Computer Vision and Robotics. He is an Associate Professor with "Aggregation" at the Department of Electrical and Computer Engineering of IST and a researcher of the Institute of Systems and Robotics (ISR) and heads the Computer and Robot Vision Lab - VisLab.
He is scientifically responsible for the participation of IST/ISR in various European and national research projects in the areas of Computer Vision and Robotics. His research interests are in Computer and Robot Vision, particularly the relationship between visual perception and the control of action, biologically inspired vision and robotics, cognitive vision, and visually controlled (land, air and underwater) mobile robots.
Prof. Santos-Victor was an Associate Editor of the IEEE Transactions on Robotics and the Journal of Robotics and Autonomous Systems.

In this talk, I will describe recent results from exploring findings in neurophysiology and developmental psychology for the design of humanoid robot technologies. The outcome of this research is twofold: (i) using biology as an inspiration for more flexible and sophisticated robotic technologies and (ii) contributing to the understanding of human cognition by developing biologically plausible (embodied) models and systems.

One application area is the domain of video surveillance and human activity recognition. We will see how recent findings in neurophysiology (the discovery of mirror neurons) suggest that action understanding and action execution are performed by the same brain circuitry. This might explain how humans can apparently so easily understand the actions of other individuals, a capability that constitutes the building block first of non-verbal communication and then of language acquisition and social learning.

The second aspect to be addressed is the use of development as a methodological approach for building complex humanoid robots. This line of research is inspired by human cognitive and motor development, the pathway that allows newborns to progressively acquire new skills and develop new learning strategies. In engineering terms, this may be a way not only to structure the sensed data but also to master the complexity of interacting with the physical world through a sophisticated body (sensing and actuation).

During the talk, I will provide examples with several humanoid platforms used for this research: Baltazar is a humanoid torso we developed to study sensorimotor coordination and cognition; the latest results are implemented in the iCub humanoid robot, for which we designed the head, face and body covers as well as the attention and affordance learning system.


Keynote Lecture 2
Human-Robot Cooperation Techniques in Surgery
Alícia Casals
Institute for Bioengineering of Catalonia (IBEC) and Universitat Politècnica de Catalunya (UPC)

Brief Bio
Alicia Casals is a professor at the Technical University of Catalonia (UPC), in the Automatic Control and Computer Engineering Department. She currently leads the Robotics and Medical Imaging research group at the Institute for Bioengineering of Catalonia, and is a member of the GRINS (Intelligent Robotics and Systems) research group at UPC. Her research aims to improve human-robot interaction through multimodal perception, focused mainly on medical robotics; in this field she works on rehabilitation, assistance and surgical applications. Her background is in Electrical and Electronic Engineering, with a PhD in Computer Vision. From 2001 to 2008 she coordinated the Education and Training key area within EURON, the European Robotics Network of Excellence, and she served as RAS Vice President for Membership in 2008-2009.
Her research projects have won several awards: an award for social invention (Mundo Electrónico), the International Award Barcelona '92 (Barcelona City Hall), the Ciutat de Barcelona Award 1998 (Barcelona City Hall), and the Narcís Monturiol Medal from the Catalan Government in 1999 in recognition of her research trajectory. Since 2007, Prof. Casals has been a member of the Institut d'Estudis Catalans, the Academy of Catalonia.

The growth of robotics in the surgical field is a consequence of progress in all its related areas: perception, instrumentation, actuators, materials, computing, and so on. However, the limited intelligence of current robots makes teleoperation an essential means of robotizing the Operating Room (OR), helping to improve surgical procedures and to make the best of the human-robot pairing, as already happens in other fields of robotic application. The assistance a teleoperated system can provide results from control strategies that combine the many strengths of high-performance computers with the surgeon's knowledge, expertise and will. In this lecture, an overview of teleoperation techniques and operating modes suitable for the OR will be presented, considering different levels of cooperation. Special emphasis will be placed on selecting the most adequate of the currently available interfaces, able to operate in such demanding environments.


Keynote Lecture 3
Making Microrobots Move
Bradley Nelson
Robotics and Intelligent Systems at ETH-Zürich

Brief Bio
Brad Nelson is the Professor of Robotics and Intelligent Systems at ETH Zürich. His primary research focus is on microrobotics and nanorobotics with an emphasis on applications in biology and medicine. He received a B.S.M.E. from the University of Illinois at Urbana-Champaign and an M.S.M.E. from the University of Minnesota. He has worked as an engineer at Honeywell and Motorola and served as a United States Peace Corps Volunteer in Botswana, Africa, before obtaining a Ph.D. in Robotics from Carnegie Mellon University in 1995. He was an Assistant Professor at the University of Illinois at Chicago (1995-1998) and an Associate Professor at the University of Minnesota (1998-2002). He became a Full Professor at ETH Zürich in 2002.
Prof. Nelson has been awarded a McKnight Land-Grant Professorship and is a recipient of the Office of Naval Research Young Investigator Award, the National Science Foundation Faculty Early Career Development (CAREER) Award, the McKnight Presidential Fellows Award, and the Bronze Tablet. He was elected as a Robotics and Automation Society Distinguished Lecturer in 2003 and 2008 and won Best Paper Awards at major robotics conferences and journals in 2004, 2005, 2006, 2007, 2008 and 2009. He was named to the 2005 “Scientific American 50,” Scientific American magazine’s annual list recognizing fifty outstanding acts of leadership in science and technology from the past year for his efforts in nanotube manufacturing. His laboratory won the 2007 and 2009 RoboCup Nanogram Competition, both times the event has been held. He serves on the editorial boards of several journals, has served as the head of the Department of Mechanical and Process Engineering from 2005–2007, and is currently the Chairman of the ETH Electron Microscopy Center (EMEZ).

Microrobotics has recently entered the phase in which sub-mm sized autonomous robots are being realized. While the potential impact of these devices on society is high, particularly for biomedical applications, many challenges remain in developing genuine microrobots that will be useful to society. This talk will focus on approaches to the locomotion of microrobots in liquid and on solid surfaces. Issues in the design of external systems for providing energy and control of microrobots must be considered, and the use of externally generated magnetic fields in particular appears to be a promising strategy. Theoretical and experimental issues will be discussed, and the functionalization of the devices and efforts to scale microrobots down to the nanodomain will be presented.


Keynote Lecture 4
Dynamic Modeling of Robots Using Recursive Newton-Euler Techniques
Wisama Khalil
Ecole Centrale de Nantes, IRCCyN

Brief Bio
Wisama Khalil received the Ph.D. and the "Doctorat d'Etat" degrees in robotics and control engineering from the University of Montpellier, France, in 1976 and 1978, respectively. Since 1983, he has been a Professor at the Automatic Control and Robotics Department, Ecole Centrale de Nantes, France. He is the coordinator of the Erasmus Mundus master course EMARO "European Master in Advanced Robotics". He carries out his research within the Robotics team of the Institut de Recherche en Communications et Cybernétique de Nantes (IRCCyN). His current research interests include modeling, control, and identification of robots. He has more than 100 publications in journals and international conferences.

In this keynote, the author will present the use of recursive Newton-Euler techniques to model different robotic structures. Their main advantage is that numerical or symbolic programming can be used to develop these models with a reduced number of operations. First, the use of the method to generate the inverse and direct dynamic models of rigid tree-structured systems will be presented. The method will then be generalized to other structures, such as closed-loop robots and parallel robots. Its application to articulated robots with a moving base, such as eel-like robots, will be presented using three recursive calculations. Finally, the case of wheeled mobile robots with rigid wheels and point contact will be treated, ending with the modeling of cars with flexible tyres.
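To give a concrete flavor of the recursion behind these models, the sketch below implements inverse dynamics for a planar serial chain with the classical two-pass Newton-Euler scheme: an outward pass propagating velocities and accelerations from base to tip, and an inward pass propagating forces and moments back to the joints. The planar simplification, the function name and the parameter conventions are this sketch's own assumptions, not material from the lecture:

```python
import math

def rnea_planar(q, dq, ddq, L, c, m, I, g=9.81):
    """Inverse dynamics of a planar revolute serial chain (recursive Newton-Euler).

    q, dq, ddq : joint angles, rates and accelerations (lists of length n)
    L, c       : link lengths and centre-of-mass offsets along each link
    m, I       : link masses and inertias about each centre of mass
    Returns the list of joint torques.
    """
    n = len(q)
    # Outward pass: propagate angular velocity/acceleration and frame-origin
    # acceleration from the base to the tip. Setting the base acceleration
    # to (0, g) is the standard trick that accounts for gravity along -y.
    w = [0.0] * (n + 1)
    dw = [0.0] * (n + 1)
    a = [(0.0, g)] + [None] * n
    ac = [None] * (n + 1)          # centre-of-mass accelerations
    for i in range(1, n + 1):
        ci, si = math.cos(q[i - 1]), math.sin(q[i - 1])
        w[i] = w[i - 1] + dq[i - 1]
        dw[i] = dw[i - 1] + ddq[i - 1]
        Lp = L[i - 2] if i >= 2 else 0.0   # joint i sits at the tip of link i-1
        ax = a[i - 1][0] - w[i - 1] ** 2 * Lp
        ay = a[i - 1][1] + dw[i - 1] * Lp
        a[i] = (ci * ax + si * ay, -si * ax + ci * ay)  # rotate into link-i frame
        ac[i] = (a[i][0] - w[i] ** 2 * c[i - 1], a[i][1] + dw[i] * c[i - 1])
    # Inward pass: propagate forces and moments from the tip back to the base.
    fx, fy, nz = 0.0, 0.0, 0.0
    tau = [0.0] * n
    for i in range(n, 0, -1):
        Fx, Fy = m[i - 1] * ac[i][0], m[i - 1] * ac[i][1]
        # Planar cross products: [c,0]x[Fx,Fy] = c*Fy and [L,0]x[fx,fy] = L*fy.
        tau[i - 1] = I[i - 1] * dw[i] + c[i - 1] * Fy + nz + L[i - 1] * fy
        # Rotate the accumulated force into the parent frame for the next step.
        ci, si = math.cos(q[i - 1]), math.sin(q[i - 1])
        fx, fy = ci * (Fx + fx) - si * (Fy + fy), si * (Fx + fx) + ci * (Fy + fy)
        nz = tau[i - 1]
    return tau
```

For a two-link arm at rest this reproduces the familiar gravity torques, e.g. tau2 = m2*g*c2*cos(q1+q2); with gravity switched off and a unit acceleration on joint 1 it returns the first column of the inertia matrix. The same two-pass structure is what extends, with bookkeeping, to tree structures and moving bases.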


Keynote Lecture 5
Emotive Driver Advisory System
Oleg Gusikhin
Ford Research & Adv. Engineering

Brief Bio
Dr. Oleg Gusikhin is a Technical Leader at the Ford Manufacturing, Vehicle Design and Safety Research Laboratory. He received his Ph.D. from the St. Petersburg Institute of Informatics and Automation of the Russian Academy of Sciences and an MBA from the Ross School of Business at the University of Michigan. For over 15 years, he has worked at Ford Motor Company in different functional areas, including Information Technology, Advanced Electronics Manufacturing, and Research & Advanced Engineering. During his tenure at Ford, Dr. Gusikhin has been involved in the design and implementation of advanced information technology and intelligent controls for manufacturing and vehicle systems. He is a recipient of the 2004 Henry Ford Technology Award and two Ford Research and Advanced Engineering Technical Achievement Awards. He holds 2 patents and is a co-author of 8 patent applications on advanced vehicle infotainment technology.

The Emotive Driver Advisory System (EDAS) is a Ford Research project exploiting advances in information technology and consumer electronics to enhance the driver's experience. EDAS was inspired by recent developments in affective computing, open mic grammar-based speech recognition, embodied conversational agents, and humanoid robotics focusing on personalization and context-aware adaptive and intelligent behavior. The EDAS concept was revealed at the 2009 Consumer Electronics Show and the 2009 North American International Auto Show as EVA, Emotive Voice Activation.
The core elements of EDAS include an emotive and natural spoken dialogue system and an avatar-based visual interface integrated with adaptive vehicle controls and cloud-based infotainment. The system connects the vehicle, the driver, and the environment, while providing the dialogue strategy best suited to the given driving context and the emotive state of the driver. Furthermore, the system leverages cloud-based infotainment, allowing personalized, context-aware and interactive delivery of infotainment services. Specifically, we demonstrate how EDAS enhances four of the most common in-vehicle infotainment activities: points of interest, news radio, music, and refueling notification and advice.


Keynote Lecture 6
Fingertip Force Measurement by Imaging the Fingernail
John Hollerbach
University of Utah

Brief Bio
John M. Hollerbach is Professor of Computing, and Research Professor of Mechanical Engineering, at the University of Utah. He also directs the Robotics Track, a joint graduate program between the School of Computing and Department of Mechanical Engineering. From 1989-1994 he was the Natural Sciences and Engineering/Canadian Institute for Advanced Research Professor of Robotics at McGill University, jointly in the Departments of Mechanical Engineering and Biomedical Engineering. From 1982-1989 he was on the faculty of the Department of Brain and Cognitive Sciences and a member of the Artificial Intelligence Laboratory at MIT; from 1978-1982 he was a Research Scientist. He received his BS in chemistry ('68) and MS in mathematics ('69) from the University of Michigan, and SM ('75) and PhD ('78) from MIT in Computer Science. He is presently the Vice President for Technical Activities of the IEEE Robotics and Automation Society, and Editor of the International Journal of Robotics Research.

Shear and normal forces from fingertip contact with a surface are measured by external camera images of the fingernail. Due to mechanical interaction between the surface, fingertip bone, and fingernail, regions of tension or compression are set up that result in reddening or whitening due to blood flow. The effect is quantitative enough to serve as a transducer of fingertip force. Due to individual differences, calibration is required for the highest accuracy. Automated calibration is achieved by use of a magnetically levitated haptic interface probe.
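The calibration step described above can be pictured as fitting a regression from nail-coloration features to applied force. The toy sketch below fits a one-dimensional ordinary least-squares model mapping a whitening-intensity feature to normal force; the assumption of an approximately linear response, the feature choice and all names and numbers are illustrative, not details from the talk:

```python
def fit_linear(intensity, force):
    """Ordinary least-squares fit: force ~ a * intensity + b."""
    n = len(intensity)
    sx, sy = sum(intensity), sum(force)
    sxx = sum(x * x for x in intensity)
    sxy = sum(x * y for x, y in zip(intensity, force))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical calibration pass: paired (whitening intensity, applied force)
# samples, e.g. recorded while a haptic device presses at known forces.
samples_x = [0.10, 0.25, 0.40, 0.55, 0.70]
samples_f = [0.5, 1.25, 2.0, 2.75, 3.5]    # newtons
a, b = fit_linear(samples_x, samples_f)

def estimate_force(intensity):
    """Map a new intensity reading to an estimated normal force."""
    return a * intensity + b
```

In practice the mapping is per-subject (hence the individual calibration the abstract mentions) and would use richer image features and separate shear components, but the fit-then-predict structure is the same.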


Page updated on 29/06/10       Copyright © INSTICC