24 Proprio and Teleoperation of a Robotic System for Disabled Persons' Assistance in Domestic Environments

Carlos Balaguer¹, Antonio Giménez¹, Alberto Jardón¹, Raúl Correal¹, Santiago Martínez¹, Angelo M. Sabatini², and Vincenzo Genovese²

¹ University Carlos III de Madrid, Depto. de Ingeniería de Sistemas y Automática, Avenida de la Universidad, 30. 28911 (Leganés) Madrid, Spain
{balaguer,agimenez,ajardon,rcorreal,smartinez}@ing.uc3m.es
² Scuola Superiore Sant'Anna, ARTS Lab, Piazza Martiri della Libertà, Pisa 56127, Italy
[email protected], [email protected]

Summary. This chapter describes a teleoperation system for assisting disabled and elderly people in their daily lives and work environments. The developed system, ASIBOT (assistive robot), is part of the EU 5th FP project MATS (IST 2001-32080). The goal of the project is to develop a new concept of teleoperated robotic system that helps people in daily domestic activities such as eating, drinking, shaving, grooming, or simply retrieving objects from shelves or from the floor. The distinctive feature of the ASIBOT system is a snake-like robot arm capable of moving serially between wall-mounted or table-mounted docking stations. The robot is also able to "jump" to or from a wheelchair. An important property of proprio and teleoperation of assistive robots is that the operator can at the same time be the patient, the user, and the target of the task. Due to disability, the operator has limited capability to control the master and introduces considerable delays in closing the teleoperation loop. Depending on the level of the operator's disability, different types of HMI are needed; some are commanded by voice, simple switches, or a joystick.

24.1 Introduction

A teleoperated robotic system is commonly formed by two different sites: the operator site, where the master and the human operator are located, and the remote site, where the robot performs the remote task. The human is thus "isolated" from the working environment and is safe at every moment. The present chapter introduces a new concept of robotic teleoperation, called proprio & teleoperation, in which the operator and remote environments are sometimes, but not always, the same. The human operator teleoperates a robot whose working environment includes himself or herself. In general, the human cannot be assumed safe in the master environment, since this area can at the same time be the remote environment. Human factors are therefore important not only for the teleoperation itself but also for safety reasons. The present work also describes the teleoperation architecture of the ASIBOT system, in which two different control loops (internal and external feedbacks) automatically adjust their roles in an overall control strategy. This adjustment depends on the operator/patient's level of motion impairment. Finally, the chapter presents experimental results of ASIBOT robot applications in serving people in their domestic environment. The main conclusions and operator/patient preferences are also presented.

M. Ferre et al. (Eds.): Advances in Telerobotics, STAR 31, pp. 415-427, 2007. © Springer-Verlag Berlin Heidelberg 2007, springerlink.com

24.2 Proprio and Teleoperation Control Architecture

In most teleoperated systems, large amounts of data are exchanged between the user and the workspace of the slave robot [1, 2]. Devices such as joysticks, keypads, haptic devices, etc. are used to send commands from the user to a computer. Communication between the master and slave computers suffers delays due to distance and/or computational complexity. These problems are of special relevance in teleoperation and have motivated extensive study [3, 4, 5], as presented in previous chapters. In a teleoperated system, the slave environment carries different sensors, such as stereoscopic cameras and microphones, as well as force/torque sensors used for force reflection. All these elements allow users to better know the workspace and obtain good telepresence. The user receives forces, images, and audio from the slave environment through a communication link, presented on displays, speakers, etc. Up to now, teleoperated systems have been modelled with a high degree of accuracy, with different solutions proposed depending on the slave work environment, sensorial system, and distance between master and slave. Most of these applications are teleoperated by skilled people who know the system and are experienced in such tasks. In an assistive system, users do not have such skills, so not all classic teleoperation devices are appropriate for disabled people. It is necessary to develop new tools, or different HMIs appropriate to the user's level of disability. Depending on their disability, teleoperators will in most cases have a slower time response. This means that a delay can arise due to the user's behaviour, and not due to a communication delay between master and slave. This fact requires changes in the value of the user's transfer function, as proposed by McRuer [20, 6].
Furthermore, the operator's ability to sense reaction forces reflected from the slave is lessened. The proposed system, shown in Fig. 24.1, is a special teleoperated system, since the user is both the slave workspace and the target of the required task: the user is the environment where the robot must work. It is therefore sometimes unnecessary to use cameras or other sensors to provide telepresence; since the user is located at the centre of the task, the robot can be seen clearly and its distance estimated directly. Due to these factors, the functionality of the teleoperated system has to be changed in this type of application. The robot must be autonomous for certain tasks (Fig. 24.2), as the system cannot wait for a response from the user. Several tasks, such as eating or drinking, are better performed when pre-programmed. This allows a non-skilled user to simply push a button, move a stick, or send a voice command through the adapted user HMI; i.e., the user pushes the meal button, and the robot then fills the spoon and brings the food to the user. In this manner, it is not necessary for the user to guide the robot directly. In other applications (e.g. gaming) it is more suitable for the user to command the robot directly. In such cases, the teleoperation is mostly discontinuous in order to reach a high safety level and avoid time delays.

Fig. 24.1. Proprio & teleoperation scheme

Fig. 24.2. Teleoperation

24.2.1 Teleoperation Control Strategy

Teleoperated systems have a strong control loop between the remote and operator sites. The user can command all slave movements and trajectories, with main priority given to the control loops of the remote and operator sites; both control loops try to provide the operator with good telepresence. The teleoperation architecture of the ASIBOT system has two different control loops (master and slave feedbacks) which automatically adjust their roles in the overall control strategy. This adjustment depends on the operator/patient's level of motion impairment, as shown in Fig. 24.1. In this kind of application, it is not necessary to send data from the remote site to the operator site, because the non-skilled user is located at the task area. In this case the patient is involved in both control loops, master and slave. The master loop has a slow time response, because the user does not have the same dexterity as an operator in a classical teleoperated system. Since the robot has to work next to the user/patient, the slave loop is designed so as not to hurt him/her. Patients with a minor degree of disability can use a control strategy very similar to classical systems. When the user's degree of disability is higher, the ASIBOT system increases the priority of the slave loop: if the robot detects a fault after a command has been sent, the user may have no time to abort the order, but the robot is still able to maintain its safety level. Safety and reliability are particularly important in applications where the user's welfare is involved, especially with motion-impaired or cognitively disabled users. Since full system safety and precision is unfeasible [7, 8], it is necessary to establish the maximum acceptable cost-over-risk ratio.
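The priority shift between the two loops can be sketched as follows. This is an illustrative sketch only: the normalized impairment scale, the function names, and the boolean safety check are assumptions, not part of the published ASIBOT controller.

```python
def loop_priorities(impairment):
    """Blend master/slave loop priorities from a motion-impairment level
    in [0, 1] (assumed scale): 0 behaves like classic teleoperation,
    1 gives full priority to the safety-oriented slave loop."""
    level = min(max(impairment, 0.0), 1.0)
    return {"master": 1.0 - level, "slave": level}

def slave_loop_step(command, is_safe):
    """One slave-loop cycle: execute the user's command only if the robot's
    own safety check passes, since the (slow) master loop may not be able
    to abort in time."""
    return "execute" if is_safe(command) else "abort"

print(loop_priorities(0.5))                      # equal priority
print(slave_loop_step("move", lambda c: False))  # abort
```

The point of the sketch is that the abort decision lives in the slave loop, next to the robot, rather than waiting on the impaired operator's slow response.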

The design of such assistive systems must include redundancy procedures that ensure no user command is lost, even in cases of sub-system failure. Appropriate design of the control interface enables substantial safety strategies. Moreover, "error recovery", the ability to handle commands sent by the user that could affect the welfare and security of the system or the user, is also taken into account. The system must "forgive", allowing the user to retract a selected command with minimal penalty in time loss and system interaction.
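The "forgiving" behaviour might be modelled as a pending-command queue that lets the user retract an order before it reaches the robot. A minimal sketch; the class and method names are hypothetical, not from the ASIBOT software.

```python
from collections import deque

class ForgivingCommandQueue:
    """Hold submitted commands until execution so the user can still
    retract the most recent one ("error recovery") at minimal cost."""

    def __init__(self):
        self._pending = deque()

    def submit(self, command):
        self._pending.append(command)

    def retract(self):
        """Cancel the most recently submitted, not-yet-executed command."""
        return self._pending.pop() if self._pending else None

    def execute_next(self):
        """Hand the oldest pending command to the robot controller."""
        return self._pending.popleft() if self._pending else None

q = ForgivingCommandQueue()
q.submit("serve drink")
q.submit("shave")
q.retract()              # user changes mind: "shave" is cancelled
print(q.execute_next())  # serve drink
```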

24.3 Control Architecture for Disabled People

Fig. 24.3 shows the overall control architecture of the ASIBOT system. Three different levels of computational tasks are considered, implemented in the following subsystems: (i) the Human-Machine Interface (HMI); (ii) the Room Controller (RC); (iii) the Arm Controller (AC).

Fig. 24.3. The control architecture of the ASIBOT robot (the HMI, Room Controller, and Arm Controller communicate over wireless links; docking stations are mounted on the wall and the wheelchair, with a localization system and user activity transducer in the environment)

The HMI is the device available to the user: a) to command the arm; b) to be informed about the state of the device or the task in which the arm is involved; c) to benefit from navigational feedback offered during transfer maneuvers from a wall-mounted docking station to the wheelchair docking station and vice-versa; d) to get access to standard application software, including an Internet browser and e-mailer. The RC is a computer whose main functions are: a) to perform path-planning activities, so that the arm can be moved optimally within the network of available docking stations from a given starting point to the specified target configuration; b) to select and send the list of motion commands needed by the AC to move the robot arm; c) to perform wheelchair localisation via a monocular vision-based system built around
web-cam image sensors (sensor-based assistance to docking for management of arm transfer procedures) [21]. The AC is embedded within the robot arm structure. Its main functions are: a) the communication protocol (to interact with the client HMI) and command interpreter; b) kinematic transformations (direct and inverse kinematics); c) path-planning (for straight-line movements); d) connection to the amplifiers; and e) commands to digital inputs and outputs (e.g. opening and closing the gripper). Peculiar to the ASIBOT approach is the consideration of different interaction situations between the disabled person and the robot: (i) proprio & telerobotics, where the robot and the user are in a close spatial relationship, e.g., when the user eats or drinks; (ii) telerobotics, where the robot is controlled by the user but the robot and the user are in different spatial locations, e.g., when the system is selecting the specific tool needed for the function to be accomplished (electric razor, tooth-brush, etc.); (iii) autonomous robotics, where the robot may or may not be in close spatial proximity, but no interaction between the user and the robot is required: the arm moves autonomously through different points in the network of docking stations. These interaction situations combine spatial relationships, situational context, and modality of interaction, which can change over time while using the robot system. The bridge between the different interaction situations can be built by providing users with a mobile control and command platform.

24.3.1 Human Factors of Disabled Operators

Human factors are the main requirement in the design of the operator site, especially in this application. The key part in the control architecture of any assistive robot is the usability of the HMI, because overall performance is HMI-dependent. Interaction devices face several mutually conflicting design trade-offs and complications. Users, by the very nature of their potential benefit from an assistive robotic device, are also very limited in the manner in which they can interact with the device, while at the same time device specifications vary widely. Direct control is good for avoiding uncertainty, but task execution is tedious; executing a pre-programmed task is much faster, yet such systems cannot meet some of the user's requirements, and the effort required to program a task has been criticized. A need has been perceived to provide a non-technically oriented person with easy tools for performing or programming tasks [9]. The conflicting constraints are to maximize flexibility while minimizing both the time required to perform a task and the cognitive load placed on the user [10]. In order to design an interface for an assistive robot that allows the user to be 'in the loop' as the main part of the interaction architecture, the ASIBOT robot takes the following considerations into account:

- The HMI device must be portable, and preferably wearable. This defines some physical characteristics of the input device, including size, range of motion, and strength required for activation, and whether the device is a joystick, single switch, or other mode of input. The interface must be updatable and expandable, in order to easily add new devices which allow adaptation of the overall system to the progressive degeneration of the user's residual capabilities. A PDA is a suitable device to meet these requirements.
- Flexibility and connectivity are needed in order to communicate with the robot and the environment. It is necessary to pay special attention to issues such as how users differ when using the interface, and how the interface fits into each user's environment.
- A high degree of usability is required. Non-skilled users and the cognitively handicapped must be able to use all the functions of the system without much effort or heavy mental load.
- The interface must reduce the mental load on the user, showing only truly relevant information and performing adequate sensor data fusion to free the user from doing so. This allows the user to concentrate on problems related to task execution rather than on handling the interface itself.
- The interface and control architecture must allow the modality of interaction (related to the degree of autonomy) to be changed at execution time. The HMI has to allow scalability in the involvement of the user inside the control loop. In this manner, the user decides how to use the robot: by direct control, or as an observer while the system performs an automated task such as robot connector transference.

A thorough analysis of several HMI techniques can be found in the literature [11]. Nevertheless, Table 24.1 shows a list of interface devices vs. kinds of disability, covering different motion-impairment levels and residual capabilities. Each column corresponds to a group of target users, and the rows show the usability of several kinds of interface devices. Residual capacities are ordered from left to right in ascending order of disability, from users able to move their lower limbs to those with a high degree of motion impairment. The second column refers to the output format of the device actuated by the user.
'C' represents a command-type output, generated by software running on a PC or PDA, or in general by any mechatronic device able to communicate with the robot. The letter 'O' refers to simple devices such as switches, licornes (head pointers), or push buttons, physically connected to a control unit such as a PDA or other complex system, and associated with a screen or voice menu that allows the user to select the desired action; this is a popular way to interface severely disabled people [14, 12]. The letter 'P' denotes analogue transducer-based devices, such as joysticks, activated by a hand, the chin, the back of the neck, a foot, etc., in which proportional control requires dextrous control of the related movement. User response analysis and characterization provide the basis for defining the architecture and behaviour of any assistive system. One of the simplest and most straightforward user modelling methods is the Model Human Processor (MHP). It is based on segmenting the user response time into three mutually independent components: first, the time to perceive an event; second, the time to process the information and decide upon a course of responsive action; and finally, the time to perform the appropriate response. Consequently, the total response time to a stimulus can be described by (24.1):

Total time = A · τp + B · τc + C · τm    (24.1)

where A, B and C are integers and τp, τc and τm correspond to the times for single occurrences of the perceptual, cognitive and motor functions [13].
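Equation (24.1) is easy to evaluate numerically. The sketch below uses the typical cycle times reported by Card, Moran and Newell (roughly τp ≈ 100 ms, τc ≈ 70 ms, τm ≈ 70 ms); these defaults are illustrative, not parameters given in this chapter, and motion-impaired users in particular can deviate from them substantially.

```python
# Typical Model Human Processor cycle times, in milliseconds
# (illustrative defaults; individual users can deviate substantially).
TAU_P, TAU_C, TAU_M = 100, 70, 70

def mhp_response_time(a, b, c, tau_p=TAU_P, tau_c=TAU_C, tau_m=TAU_M):
    """Eq. (24.1): total time = A*tau_p + B*tau_c + C*tau_m, where A, B, C
    count the perceptual, cognitive and motor cycles of the response."""
    return a * tau_p + b * tau_c + c * tau_m

# Simple reaction: one perception, one decision, one motor action.
print(mhp_response_time(1, 1, 1))  # 240
```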


Table 24.1. Interface vs. disability classification; A: Output type; B: No legs mobility; C: No upper and lower limbs; D: No head, neck or feet mobility; E: Totally motion impaired (only vision, hearing, and voice); F: No voice or very difficult vocalization (only vision and hearing)

Interface device                                        A    B    C    D    E    F
PC, keyboard, and mouse                                 C    Yes  No   No   No   No
PDA + pointer                                           C    Yes  No   No   No   No
PDA + tactile screen                                    C    Yes  No   No   No   No
Joysticks hand activated, Space Mouse 3D                P/C  Yes  No   No   No   No
Tactile input / haptic output devices                   P/C  Yes  No   No   No   No
Single switch handled screen interfaces                 O    Yes  Yes  No   No   No
Gesture recognition (head, shoulder or hand movements)  C    Yes  Yes  Yes  No   No
Head/shoulders activated joysticks                      P/C  No   Yes  No   No   No
EMG, eyes or gaze tracking                              P/C  Yes  Yes  Yes  Yes  Yes
EEG-BCI                                                 C    Yes  Yes  Yes  Yes  Yes
Face recognition, facial command generation             C    Yes  Yes  Yes  Yes  Yes

Although this is a very simple model, it was selected because it is easy to understand and makes deviations from predicted behaviour easy to observe. Other articles analyse motion-impaired users and compare them with able-bodied users, e.g. [17].

24.3.2 ASIBOT HMI

Every group of users has different characteristics, abilities and possibilities; however, most target users have mobility problems and are restricted to a wheelchair. The device chosen to serve as the user interface is a PDA (Pocket PC), for several reasons. One is its small size and weight: it is highly portable, can be carried easily by any user or attached to a wheelchair in view of the user, and consumes very little power. Another is its versatility and ease of use: the screen on the front of the device offers tactile input, as explained below. Several ways of using the PDA to control the robot have been developed: a tactile screen used with a pointer or a finger, a scanning system with a selection button, a joystick connection, and a voice recognition system. Users can choose the interface that best suits their ability to control the robot.

Tactile: Users who can move a hand can use the PDA in its typical manner, with a graphic interface and a pointer to select options from the screen, similar to a conventional PC program. The control application has been designed for ease of use: it lets an unskilled user adequately carry out the desired task and operate the robot. The graphic interface is window-based; the goal is to keep it simple through its similarity to standard programs using typical window selection, buttons, and text boxes. If the user cannot move his/her hands but can move a finger, the application can be controlled in the same manner thanks to the screen's tactile feature: the buttons can be pressed by pointing the finger at the desired option on the screen.

Scan system: The goal of this system is to ease the way a user selects among the options offered by the HMI's graphic interface. It works by rotationally highlighting the different menus and possible choices on the screen: option 1 is highlighted for a number of seconds, then option 2, then option 3, and so on. The user selects the currently highlighted option by simply pressing a button installed on the wheelchair or connected to the PDA. This method is intended for the most severely disabled users.

Joystick: A joystick specially designed for this project (Fig. 24.4) provides faster control of the robot and serves multiple purposes. One use is to move from one application or screen to another, using the joystick as a pointer to select desired options from a screen or menu.
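The scanning behaviour described above can be sketched as follows; timing is abstracted into discrete ticks, and the function and parameter names are hypothetical, not taken from the ASIBOT software.

```python
import itertools

def scan_select(options, pressed_on_tick, ticks_per_option=4):
    """Rotationally highlight each option for a fixed window of ticks and
    return the option highlighted when the single switch is pressed.
    pressed_on_tick(t) models the user's button state at tick t."""
    tick = 0
    for option in itertools.cycle(options):
        for _ in range(ticks_per_option):
            if pressed_on_tick(tick):
                return option
            tick += 1

# The user presses the switch at tick 5, while "drink" is highlighted
# (ticks 0-3 highlight "eat", ticks 4-7 highlight "drink").
choice = scan_select(["eat", "drink", "shave"], lambda t: t == 5)
print(choice)  # drink
```

A real implementation would replace the tick loop with a wall-clock dwell time and redraw the highlighted option on the PDA screen; the selection logic is the same.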

Fig. 24.4. Joystick-activated HMI prototype

Fig. 24.5. ASIBOT robot design (joints 1-5, end points A and B, and docking station)

Another purpose is to move the robot itself. The user moves the joystick in a given direction to drive the robot in that direction; when the joystick returns to its rest position, the robot stops. The speed of the robot can also be changed according to the pressure the user exerts on the handle. This lets the user control the system by hand, achieving a more realistic sense of moving the robot. This joystick could be the wheelchair's own joystick, making a second joystick unnecessary: by pressing a button, its function is switched between wheelchair control and robot control. Voice recognition: Unfortunately, some users are unable to move their arms, hands, fingers or neck, and cannot control the robot with any of the methods described previously. For them, another mode of control has been developed, based on a voice recognition system connected to the PDA that listens for the user's orders.
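The proportional joystick mode described above amounts to a deadband-plus-scaling map from stick deflection and grip pressure to a velocity command. A minimal sketch; the value ranges, deadband, and maximum speed are assumptions, not figures from the chapter.

```python
def joystick_to_velocity(x, y, pressure, v_max=0.10, deadband=0.08):
    """Map stick deflection (x, y in [-1, 1]) and grip pressure (in [0, 1])
    to a planar velocity command (vx, vy) in m/s.  Inside the deadband the
    robot stops, matching the stick's rest position."""
    if abs(x) < deadband and abs(y) < deadband:
        return (0.0, 0.0)  # stick at rest -> robot stops
    scale = v_max * min(max(pressure, 0.0), 1.0)
    return (scale * x, scale * y)

print(joystick_to_velocity(0.0, 0.0, 1.0))  # (0.0, 0.0)  stick at rest
print(joystick_to_velocity(1.0, 0.0, 0.5))  # half pressure -> half speed
```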
A wireless headphone with a microphone can be connected to the PDA via Bluetooth; this allows the user to speak from anywhere in the room without having the PDA in front of him/her.

24.3.3 Assistive Robot ASIBOT

The ASIBOT robot has five degrees of freedom and is divided into two parts: the tips, each of which carries a docking mechanism (DS) to connect the robot to the wall or a wheelchair, plus a gripper; and the body, whose two links contain the electronic equipment and the control unit of the arm. In this manner the robot is self-contained and portable, with an overall weight of 11 kg. It is important to note that the robot is symmetric, so the arm can be attached by either of its ends. It is made of aluminium and carbon fiber; the actuators are DC torque motors, and the gears are flat Harmonic Drives. Power is taken from the connector placed in the centre of the docking station. The range and position of the different joints can be seen in Fig. 24.5. ASIBOT is designed to be modular and capable of fitting into any environment: the robot can move accurately and reliably between rooms and up or down stairs, and can be transferred to and from a wheelchair [9]. For this purpose the environment is equipped with serial docking stations which make the transition of the robot from one to another possible. This degree of flexibility has significant implications for the care of disabled and elderly people with special needs, and the modularity lets the system grow as the user's degree of disability changes.

Fig. 24.6. ASIBOT robot in domestic environment and types of DS (fixed, mobile, and wheelchair docking stations distributed through the kitchen, dining-room, bed-room and bath-room)

There are three different kinds of DS (Fig. 24.6):
• Fixed DS. These mechanisms are fixed to walls wherever needed to perform specific tasks; one could also be fixed to a table, for example for putting plates into the dishwasher.
• Mobile DS. When the robot must cover a long distance between two DS, moving at high speed is desirable. This is achieved by moving along a rail attached to the wall or the table.


• DS inside the wheelchair. A special DS located in the wheelchair; a matching DS in the room allows the transition between the room DS and the wheelchair.

24.4 ASIBOT Robot Applications

The main applications of the robot are focused on domestic tasks. A high degree of precision during these motions is not necessary, except when moving between two DS. During the design process it was decided that, for eating and shaving tasks, the only action the robot must perform is to move the spoon, the shaver, or the toothbrush to the user. Fig. 24.7 shows several working environments where the ASIBOT robot performs domestic tasks, such as shaving and drinking, and likewise illustrates the control strategy explained above. During these tasks, the control of speed and acceleration along the arm's trajectories is very important, owing to the proximity of the user to the operating robot. If the robot is moving a spoon with food, it is crucial to control the orientation of its outermost part in order to avoid dropping the food.
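Keeping the spoon level can be expressed as a tilt constraint applied along the trajectory. A minimal sketch; the waypoint format and the 10° limit are assumptions for illustration, not values from the ASIBOT controller.

```python
def clamp_spoon_tilt(waypoints, max_tilt_deg=10.0):
    """Clamp the end-effector tilt at each trajectory waypoint so the spoon
    never tips far enough to drop food.  Each waypoint is
    (position, tilt_deg); the position is passed through unchanged."""
    return [
        (pos, min(max(tilt, -max_tilt_deg), max_tilt_deg))
        for pos, tilt in waypoints
    ]

path = [((0.0, 0.0, 0.5), 0.0), ((0.2, 0.1, 0.6), 25.0)]
print(clamp_spoon_tilt(path))
# [((0.0, 0.0, 0.5), 0.0), ((0.2, 0.1, 0.6), 10.0)]
```

In practice the constraint would be imposed by the inverse-kinematics solver rather than by post-processing waypoints, but the effect on the commanded orientation is the same.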

Fig. 24.7. The ASIBOT robot connected to two different fixed docking stations during shaving and drinking tasks

24.5 Experimental Results

The user trials were motivated by the aim of directly seeking disabled users' opinions about domestic and workplace applications. Our intention was to focus on detecting the level of acceptance, identifying prejudices and fears, and uncovering unmet needs and expectations. The protocol was applied in two different scenarios: first, live demonstrations in the laboratory with users from rehabilitation centres, and second, demonstrations via teleconference, with the patients located remotely from the users controlling the robot. In both cases the demonstration was divided into two stages. Six scenarios or tasks assisted by ASIBOT were selected for user evaluation: eating, drinking, shaving, applying make-up (Fig. 24.8), picking up and placing objects, and operating the arm from a wheelchair. A brief explanatory report of the system was given to the user, and information was collected by an examiner via a questionnaire with open- and closed-ended questions. The results obtained had to be correlated with the nature of the users' pathologies, culture, residual motor abilities, etc.; an exhaustive description of the user groups tested is beyond the scope of this chapter.

Fig. 24.8. Make-up task assistance by ASIBOT

24.5.1 Performance of the Proprio and Teleoperation

After data gathering and analysis, several results were obtained, among them user contributions on how to improve the system's functions. Some proposals are contradictory, and others seem closer to fiction than reality, for example reducing the robot's size while at the same time increasing the distance between docking stations. The main reasons given for low acceptance were: too big, lack of use, risk of isolation, reduction of communication, bad appearance, frightening, too slow, etc. Overall, however, the subject group responded positively to the demonstration; they felt that the robot could constitute a welcome change in their lives, and 89% of the additional comments received were positive. There was a concern that not being able to actually use the robot would make it difficult for subjects to relate the robot to their real, everyday situation. This does not appear to be the case: while some were not able to relate its use to their situation, the majority felt that they could. When asked to express free ideas, the most popular tasks identified were food preparation, household tasks, and grasping high and low objects. Slightly more than half the subjects felt that the robot would affect the level of care/help they needed; only 9.5% felt negatively about this effect.


The size of the robot was thought to be the most significant factor; further work is needed to understand exactly how changes in physical size would influence this. Time constraints for the final user evaluation resulted in the condition profile of the subject sample being biased towards spinal injury (75% of subjects). This population is more likely to be driven towards greater levels of independence, which could account for the relatively large number who felt that a reduction in care levels was positive. This in itself is an important result, but more work is needed before generalisation across a wider spectrum of conditions is possible. The tasks rated most positively (interesting or above) were wheelchair transfer and gripping/releasing objects (over 75%) and drinking (65%); the task most often rated definitely not of interest was eating (approx. 30%). The physical size and movement speed of the robot are likely to have affected this result, which illustrates the complexity of evaluating this type of equipment and points towards the importance of a more experiential evaluation than has been possible so far [18]. There was significant support for some measure of direct control of the robot (as well as pre-programmed operation); the use of a joystick or chin control was the most popular option. The remote subjects did not perceive any possible difficulty in directing the end effector in three-dimensional space with a two-dimensional input device such as a joystick.

24.6 Conclusions

The features described above contribute to robotics research by adding a new concept of robotic teleoperation, proprio & teleoperation, which describes a scenario where the master and remote areas are the same. The human operator teleoperates a robot whose working environment includes him- or herself; in general, the human cannot be assumed safe in the master area, since it coincides with the remote environment. In this sense, the chapter has described the use of this new concept in a proprio & teleoperation system for assisting disabled and elderly people in their lives and work environments. The robot helps people in daily domestic activities such as eating, drinking, shaving, applying make-up, tooth-brushing, and retrieving objects from a shelf or the floor. Depending on the degree of the user's disability, different types of HMI have been presented. Human factors are important not only for the teleoperation itself but are also crucial for safety. The tests performed have demonstrated the feasibility of the system, which was highly accepted by users during the initial trials.

References

1. M. Buss and G. Schmidt. Multi-modal telepresence. In Advances in Control, Highlights of the 5th European Control Conference ECC'99, 1999.
2. T.B. Sheridan. Telerobotics and Human Supervisory Control. The MIT Press, 1992.
3. P. Arcara and C. Melchiorri. Control schemes for teleoperation with time delay: a comparative study. Robotics and Autonomous Systems, Vol. 38, 2002.
4. C.E. Garcia, R. Carelli, J.F. Postigo, and B. Morales. Time delay compensation control structure for a robotic teleoperation system. In Proceedings of the 4th IFAC International Symposium on Intelligent Components and Instruments for Control Applications, 2000.
5. P. Arcara and C. Melchiorri. Control schemes for teleoperation with time delay: a comparative study. Robotics and Autonomous Systems, Vol. 38, 2002.
6. D. McRuer. Human dynamics in man-machine systems. Automatica, Vol. 16, 1980.
7. W. Harwin and T. Rahman. Safe software in rehabilitation mechatronic and robotics design. In RESNA 15th Annual Conference, pages 100-102, 1992.
8. H.F.M. Van der Loos, D.S. Lees, and L.J. Leifer. Safety considerations for rehabilitative and human service robot systems. In RESNA 15th Annual Conference, pages 322-324, 1992.
9. A. Giménez, A. Jardón, R. Correal, R. Cabas, and C. Balaguer. A portable light-weight climbing robot for personal assistance applications. In 8th International Conference on Climbing and Walking Robots (CLAWAR'05), 2005.
10. C. Balaguer, A. Giménez, and A. Jardón. Climbing robots' mobility for inspection and maintenance of 3D complex environments. Autonomous Robots, Vol. 18, No. 3, pages 157-169.
11. R. Rammoun, J.M. Détriché, and F. Lauture. The new MASTER man-machine interface. In International Conference on Rehabilitation Robotics, 1994.
12. Z. Han, H. Jiang, P. Scucces, S. Robidoux, and Y. Sun. PowerScan: a single-switch environmental control system for persons with disabilities. In Proceedings of the IEEE Bioengineering Conference, pages 171-172.
13. M.J. Topping, H. Helmut, and G. Bolsmjo. An overview of the BIOMED 2 RAIL robotic aid to independent living project. In International Conference on Rehabilitation Robotics ICORR'97, pages 23-26, 1997.
14. A. Craig, Y. Tran, P. McIsaac, and P. Boord. The efficacy and benefits of environmental control systems for the severely disabled. Med Sci Monit, 11(1): RA32-39, PMID: 15614204, 2004.
15. J. Angelo. Factors affecting the use of a single switch with assistive technology devices. Journal of Rehabilitation Research and Development, Vol. 37, No. 5, pages 591-598, 2000.
16. S.K. Card, T.P. Moran, and A. Newell. The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates, 1983.
17. S. Keates, P.J. Clarkson, and P. Robinson. Investigating the applicability of user models for motion-impaired users. In Proceedings of ACM ASSETS 2000, Arlington, VA, pages 129-136, 2000.
18. C. Balaguer, A. Giménez, A. Jardón, R. Cabas, and R. Correal. Live experimentation of the service robot applications for elderly people care in home environments. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005).
19. A.M. Sabatini, V. Genovese, and E.S. Maini. Be-Viewer: a vision-based navigation system to assist motor-impaired people in docking their mobility aids. In Proc. IEEE International Conference on Robotics and Automation (ICRA 2003), pages 1318-1323.
20. D. McRuer and E.S. Krendel. Mathematical Models of Human Pilot Behavior. AGARD AG-188, 1974.
