Improving Robotic Prosthetic Hand Performance Through Grasp Preshaping


Kyranou Iris


Master of Science
School of Informatics
University of Edinburgh
2014

Abstract

Current commercial active prostheses for lower upper limbs are most commonly open-loop devices using surface electromyography (sEMG) techniques for control. Two problems result from these design choices. The first is that the control of open-loop systems requires repetitive, redundant actions in order to ensure correct operation of the device; the second relates to the limitations of surface electromyography and its sensitivity to intra-patient differences. sEMG techniques for controlling a prosthetic limb require a training period and excessive mental effort by the patient throughout the use of the device, factors that are related to the abandonment of the device. In this project, two ways of improving the performance of the prosthetic hand Robo-Limb during the pre-shaping phase are investigated, with respect to the open-loop and EMG control drawbacks. In order to close the loop, we extract patterns from the current load feedback that give us an estimate of the temporal distance travelled by the digits. We achieve a 71.6% reduction in the execution time of each pre-grasp and 91% accuracy in the estimation of the position in obstacle-free movement of the fingers, but the performance of our approach drops to 64% when the finger collides with obstacles. Regarding the ease of use and control of the device, we propose a semi-autonomous vision-based system, where the hand automatically pre-shapes according to the recognized object, reducing the use of EMG control to the final grip only. The result is to almost double the number of possible pre-grasps a patient can use, without involving any mental or physical effort. Both proposed solutions serve as a proof of concept that certain changes in the design of modern commercial prostheses can result in highly functional, sophisticated devices.


Declaration

I declare that this thesis was composed by myself, that the work contained herein is my own except where explicitly stated otherwise in the text, and that this work has not been submitted for any other degree or professional qualification except as specified.

(Kyranou Iris)


Table of Contents

1 Introduction                                          1
  1.1 Overview                                          1
  1.2 Motivation                                        3
  1.3 Approach                                          3
  1.4 Thesis Outline                                    4

2 Background                                            5
  2.1 Grasp pre-shaping                                 5
      2.1.1 Pre-grasp formations                        7
  2.2 Prosthetic Devices                                8
      2.2.1 Control of Prosthetic Devices               9
      2.2.2 Patient Training                            12
      2.2.3 Proprioceptive Sensing                      13
      2.2.4 Device abandonment                          14
  2.3 Hardware                                          15
      2.3.1 Robo-Limb by Touch Bionics                  15
      2.3.2 Camera                                      18
  2.4 Software                                          18
      2.4.1 Quick Response Codes                        19
      2.4.2 Libraries for vision                        20
      2.4.3 Robotic Operating System (ROS)              21
  2.5 Similar Visual Approaches in literature           22

3 Methodology                                           25
  3.1 Proposed System                                   25
  3.2 Interface Implementation                          26
      3.2.1 ROS nodes                                   26
      3.2.2 ROS message topics                          28
      3.2.3 Pre-grasps configuration                    28
      3.2.4 Final trigger                               31
  3.3 Vision Control                                    32
      3.3.1 Camera Position                             32
      3.3.2 QR codes Object Recognition                 33
  3.4 Current Listener Controller                       34
      3.4.1 Open-loop Solutions                         34
      3.4.2 Current patterns                            36
      3.4.3 Extracting time                             36
      3.4.4 Angular distance                            38

4 Experimental Evaluation                               41

5 Discussion and Conclusions                            53
  5.1 Contribution                                      53
  5.2 Limitations                                       54
  5.3 Future Work                                       56
  5.4 Conclusion                                        57

A Available pre-grasps by Touch Bionics (2014)          59

Bibliography                                            65

Chapter 1

Introduction

1.1 Overview

Prosthesis use is documented as early as ancient Egyptian times. In those early ages, wood, leather and iron were used to create devices that served mostly aesthetic purposes. Hand amputees used hooks or static hand effigies until the 16th century, when the first mechanical hand operated by catches and springs is dated. According to Thurston (2007), it was not until after the Second World War that discussions about prosthetic technology led to the creation of hands with more functionality. These prosthetics were mechanical, body-powered devices that offered open and close finger positions.

With the rapid development of the field of robotics, prosthetic devices are constantly incorporating advances in technology and new materials, in order to provide more functions to the patient than the traditional body-powered devices and to look and feel as similar to a natural limb as possible. Moving from the use of a simple hook to a human-like hand with individually actuated fingers was only possible because of advances in myoelectric technologies. A myoelectric-controlled prosthesis uses the electrical signals generated by the remaining muscles of the patient to control the movement the hand is going to perform. Most state-of-the-art commercial prosthetic hands, such as the Touch Bionics (2014) i-Limb, the BeBionic (2014) hand, the Vincent (2014) hand and the Ottobock Michelangelo (2014), consist of five individually actuated fingers and use electromyography techniques for their control. Figure 1.1 presents the design of these hands.

Research in the field of human grasping is incorporated in the attempt to design natural-looking prosthetic devices that offer functionalities similar to those of the human hand.


Figure 1.1: A collection of state-of-the-art commercial prosthetic hands: (a) i-Limb hand, (b) BeBionic hand, (c) Vincent hand, (d) Michelangelo hand. Figure 1.1a is taken from the Touch Bionics (2014) website, 1.1b from the BeBionic (2014) website, 1.1c from the Vincent (2014) website, and 1.1d from Schweitzer (2009).

Different taxonomies regarding the shapes the hand naturally forms in order to grasp different objects have been proposed, based on observations of natural human grasping. The classification of grasps, in combination with evidence suggesting a relation between an object's properties, such as shape and size, and the hand's pre-shapes, has motivated prosthesis manufacturers to provide a collection of possible pre-shapes of the hand, carefully selected to correspond to grasping and manipulating specific objects the patient uses in everyday life.

1.2 Motivation

The control of most state-of-the-art commercial prostheses is based on monitoring the electrical signals from the human muscles. Before a myoelectric prosthesis is fitted, the patient is examined in order to locate the most appropriate muscles to be used for control. A training period follows, where the patient learns the available pre-grasps and is trained to perform the specific muscle motions that trigger them.

Myoelectric-controlled devices require great mental effort from the patient, who needs to remember which movement to perform in order to trigger the desired pre-grasp. Rarely can the patient effectively control and remember more than four different pre-grasps, a very limited number compared to the pre-grasps programmed and offered by commercial prosthetic companies, so the patient cannot fully take advantage of the hand's capabilities. This demanding procedure often does not feel natural to the patient, leading to the abandonment of the prosthesis in favour of a simpler, body-powered hook.

Moreover, many modern prosthetic limbs do not have angular position encoders in the fingers. The hand overcomes this drawback by returning the fingers to a neutral, fully opened position every time before a pre-grasp is executed, even if the fingers are going to be positioned in exactly the same way as in the previously executed pre-grasp. The resulting transition between different pre-grasps is not as smooth as the operation of a healthy human hand, and the total amount of time needed to complete the pre-grasp is significantly increased.

1.3 Approach

In this project, two different ways of improving the prosthetic hand i-Limb by Touch Bionics are investigated, targeting the two problems described: the limitations of myoelectric control and the lack of proprioceptive sensors in the hand.

Vision-based control system: A semi-autonomous way of choosing the pre-grasp is suggested, in order to remove the patient's mental effort in executing the pre-grasp. A simple vision-based system is proposed, where a camera mounted on the prosthesis captures an image of the object to be grasped and the hand automatically chooses and executes the suitable pre-grasp without any intervention from the patient. The only things the patient needs to control are the


transport of the hand, to align it with the desired object, and the triggering of the closing and opening of the fingers around the object via electromyography techniques. We suggest that a vision-based system is more intuitive and allows the user to utilize all the available functionalities of the hand.

Closed-loop control: Regarding the lack of position encoders, a solution based on the current load patterns is investigated, in order to close the loop and obtain an estimate of the position of each finger after each pre-grasp is executed. The suggested solution is based on the way the Robo-Limb hand connects to the computer. Our approach is to estimate the time each finger moves after a command is executed and map this time to an angular distance from the fully open position. We provide a step-by-step description of the procedure we followed in order to estimate the real-time position of each finger, and compare our approach with the open-loop solutions currently employed.

1.4 Thesis Outline

Chapter 2 covers the background information and explains the terms that are used throughout this project. It includes information about the hardware we used to complete the project and the general software libraries and frameworks we used as a base for our implementation. The methodology we used to address the two problems stated in the motivation is described in chapter 3, including information about the implementation of the vision control and the current listener estimations for proprioceptive sensing. In chapter 4, the experimental set-up that we used to evaluate our approach is presented, along with the results of the experiments and the data analysis. Finally, chapter 5 summarizes the contribution of this project, discusses the limitations and suggests future extensions based on our work.

Chapter 2

Background

In this thesis we focus on upper limb prosthetic devices and their functionality, their communication with the human body, and ways to improve the performance of the commercial robotic hand Robo-Limb, specifically in the pre-shaping stage of grasping. In this chapter, some background information on the terminology and knowledge used as the foundation of our work is presented, along with related work and how our approach differs from it.

2.1 Grasp pre-shaping

The human hand is a complicated organ with over 21 degrees of freedom, capable of completing a wide range of tasks, from the simplest to the most delicate ones that require high precision in the positioning of the fingers. The task of reaching and grasping an object has been a subject of research for many years, and has been approached by various disciplines, including anatomy, physiology, psychology and engineering.

The natural approach to grasping an object can be divided into different phases. The first step, before the initiation of the grasp execution, is the decision phase, when the patient decides which object to grasp. After the object is determined, Supuk et al. (2011) suggest a three-phase division of grasping, consisting of the hand acceleration phase, the hand deceleration phase and the final closure of the fingers. The hand acceleration and deceleration phases can be seen as a single phase corresponding to the transport of the hand towards the object. The hand transport and grasp pre-shaping phases seem to work in parallel. Research by Wing et al. (1986) has shown that a-priori knowledge of the object


determines the way it is going to be approached and held by the human hand. In fact, the hand pre-shapes during the approach movement according to the object's properties and anticipated use.

The next phase is the reach-to-grasp phase, consisting of the transport of the hand and its formation before it touches the object. The hand transport or reaching phase is related to the movement the hand performs as it reaches towards the object to be grasped, while the grasp phase corresponds to the fingers' formation of a specific hold that will be used to grasp the object (Rosenbaum, 2009). Finally, the grip phase involves the final movement of the fingers touching the object with forces that allow the patient to pick it up and manipulate it.

For the purposes of this thesis, the full grasp-and-hold task can be divided into three separate phases, as shown in figure 2.1. Prior to any motion of the hand is the phase of choosing the object that is going to be grasped. After the object is chosen, the reach-to-grasp motion is initiated, which involves two parallel phases, the hand transport and the pre-shape phase. After the hand has reached close to the desired object and has formed according to its shape and use, the final phase is triggered, where the fingers close around the object allowing its manipulation by the user.

Figure 2.1: Grasping phases, from the decision of the object to be grasped to the final grip phase.

Temporal experiments by Jeannerod (1984) on the natural prehension movement suggest that during the transport phase the hand initially accelerates towards the object and, when 75% of the movement time has passed, a low-speed phase starts until the hand reaches the target. During this transport period the hand's aperture first opens and then, during the deceleration part of the movement, the fingers start closing in order to


touch the object when it is reached (Santello, 1998). Schettino et al. (2003) suggest that the hand's aperture and shape change with respect to the properties of the object to be held, such as size, weight and surface features.

For the purposes of this thesis, the term pre-grasp refers to the stage in which the hand approaches the object to be grasped, before the hand is commanded to close the fingers and touch the object, and pre-shape is the formation of the fingers during the pre-grasp stage.

2.1.1 Pre-grasp formations

Grasp synthesis is a subject approached in both analytical and empirical ways, as described by Sahbani et al. (2012). The analytical approach focuses on the kinematics and dynamics of the pre-shape and calculates the fingers' positions on the object and the applied forces on impact. The empirical approach involves learning and classification methods, focusing either on analysing and imitating a grasp performed naturally by a human, or on observing the grasped object and associating different object properties with different hand pre-shapes.

Schlesinger (1919) was the first to suggest a categorization of human pre-shapes into six different grasp types that describe the different positions of the fingers and how the object is grasped. These categories are cylindrical, tip, hook, palmar, spherical and lateral; the corresponding hand formations are shown in figure 2.2.

Figure 2.2: Grasping categories by Schlesinger (1919). Figure taken from Taylor and Schwarz (1955).

Later, Napier (1956) suggested a two-class division into power and precision grasps,


upon which all the different variations of grasps are based. Cutkosky (1989) combined the two taxonomies and expanded the previous work, including parameters such as security, stability, compliance, power and precision, in order to describe the grasps used in manufacturing tasks. Finally, Feix et al. (2009) suggest a more extensive grasp taxonomy as a base for modelling the hand pre-shapes used in everyday life, which can be further utilized for the design of a more natural-looking and natural-feeling robotic or prosthetic hand. This work formed the basis of experiments by many researchers to establish the most common pre-shapes used by healthy people in their everyday life or in their workplace. Zheng et al. (2011) performed experiments to find the frequency of use of specific pre-shapes by a professional machinist and a professional house-maid, using the Cutkosky (1989) taxonomy.

Currently, prosthetic companies utilize information about natural grasping taxonomies and offer a variety of pre-programmed pre-shapes that have been found to be the most appropriate for use in the patient's everyday life. Appendix A presents the collection of pre-shapes offered by Touch Bionics (2014) that correspond to specific objects and tasks.

2.2 Prosthetic Devices

The prosthetic devices that are currently available can be divided into two categories, passive and active, based on their levels of functionality. Passive devices do not have any moving parts; they are mainly used for aesthetic purposes and limited functionalities such as pushing, balancing and supporting objects. Their design can vary, in the case of upper limb prostheses, from simple hooks to accurate hand replicas. Active devices incorporate mechanical and electronic elements in an attempt to offer a wider range of functionalities. Although the external design of these prostheses can be the same as the passive ones, they offer the patient the capability to move individual parts of the prosthesis, thus extending the functionality of the hand and its interaction with everyday objects. These differences in functionality and freedom of movement lead to different requirements for the way the devices are controlled by the patient.

2.2.1 Control of Prosthetic Devices

The control of prosthetic devices evolved in parallel with the technology used for their construction. For passive prostheses, control is limited to the transport of the hand to the desired position. The first active prosthetic devices that had individually moving mechanical parts were body-powered. Cables originating from the device are connected to different healthy parts of the body. An example of body-powered prostheses are hook hands strapped to the opposite, healthy shoulder of the patient, as shown in figure 2.3. Different motions of the shoulder result in the opening and closing of the device.

Figure 2.3: Body-powered prosthetic mechanism patented by Selpho (1857).

Active prostheses can also be externally powered by motors. There are two control methods available for externally powered devices: switch control and myoelectric control. In switch control, the prosthetic hand has buttons that control whether individual parts of the limb are allowed to move or not. The patient chooses the desired movement, toggles the buttons and commands the hand to perform the motion in a way similar to the body-powered approach, or simply moves the limb parts with his healthy hand. This method allows a wider variety of motions, dependent on the number of switches, and different sequences of switch toggling can be mapped to different motions of the hand.

The myoelectric control of the prosthetic limb utilizes the electrical signals from the healthy muscles in the residual limb. Electrodes are placed on the surface of the skin and record the electrical signals generated by the contraction of healthy muscles, and different signals or signal sequences are translated into different motion patterns of the limb. Figure 2.4 presents the position of the electrodes on the forearm of a patient controlling the electric hand.

The control techniques presented so far are non-invasive.


Figure 2.4: Image taken from Kumar (2013), showing the position of the electrodes on the patient's forearm that are used to control the electric hand.

Carmena (2013) suggests an invasive way of communicating with the prosthetic limb, based on brain-machine interfaces that allow direct translation of neuron activity in specific areas of the brain into control signals for the prosthesis. The communication between the neurons and the machine is achieved through microelectrodes directly connected to the neurons of the patient.

The advantages of a body-powered system are that it is less expensive, lighter and more durable than an externally powered one. It also provides greater sensory feedback and needs less training time. On the other hand, externally powered devices need less body movement and energy from the patient in order to operate, offer more functionality and precision in the grasp, and are usually more natural looking than body-powered devices.

2.2.1.1 Surface Electromyography

The most popular control interface in modern commercial prostheses utilizes a myoelectric technique called surface electromyography (sEMG). It works with electrodes that are placed on working muscles, record the muscles' electrical signals and translate the different signal patterns into different hand formations. Before the patient wears the prosthesis, the most appropriate muscles that generate strong electrical signals must be carefully chosen, and the sEMG electrodes are connected to those. Different electrodes connect to muscles that are activated by different kinds of motion, and each of these signals is perceived as a different input. A training period follows, when the patient learns to contract his working muscles in a way that generates specific signal patterns, which are mapped onto different hand movements. More information about patient training is presented in section 2.2.2.


Surface electromyography offers a non-invasive, easy-to-connect and easy-to-use way of communicating with the prosthetic hand, characteristics that make it very popular for the myoelectric control of most advanced commercial externally powered prosthetic limbs.

Despite these advantages, EMG techniques introduce significant difficulties. The training of the patient, as well as the everyday use of these myoelectric devices, demands extensive mental effort from the user, who needs to constantly think about the combination of movements that will command the hand to form in the desired way. Performing the different muscle triggers can itself be strenuous and time-consuming. Moreover, it is clear that the patient cannot physically perform and mentally remember numerous different patterns of muscle movement. In practice, no more than four pre-grasps can be effectively commanded this way, as was also suggested by the Touch Bionics clinician Goodwin (2014).

In order to minimize the mental effort of the patient, research has focused on making the process of controlling the hand as intuitive as possible, by analysing the electrical signals recorded from the muscles and classifying them into object-related pre-shapes. Soares et al. (2003) extracted EMG patterns in order to distinguish between four different human arm movements. Brochier et al. (2004) classified 12 different objects using only EMG activity in macaque monkeys, while Bitzer and van der Smagt (2006) classified human finger opening and closing actions based solely on sEMG readings. Fligge et al. (2013) extracted information about an object's size and weight during the pre-shape phase, using sEMG data from healthy human subjects.

Although the focus of the research on EMG patterns is to provide more intuitive control of prostheses, these methods still require the patient to be able to perform the natural movement, which is not always possible, especially if the limb was lost long ago or was absent from birth. Our approach focuses on reducing not only the mental effort of the patient, but also the physical effort of performing the different muscle triggers for different pre-shapes, by proposing a system that pre-shapes autonomously according to the object recognized by a camera positioned on the hand. The sEMG interface is only used to trigger the closing of the fingers around the object and the opening of the fingers to release it. This way the patient does not need to reproduce a motion that a healthy human would perform, but only needs to activate two different muscle groups that command the opening and closing of the fingers.

Another significant disadvantage of the sEMG methods relates to the quality of


the recorded signals. Specifically, as reported by Castellini and van der Smagt (2009) and de Oliveira Andrade (2001), sEMG signals are subject to change depending on inter-user differences in arm shape, arm posture, electrode displacement, muscle fatigue and skin impedance. According to Kuiken et al. (2003), subcutaneous fat on the patient's forearm can also affect the signals' strength.

2.2.2 Patient Training

As mentioned in the previous section, before the prosthetic limb is ready for use, the patient must go through a training process. A description of the different steps of the training procedure follows.

Locate muscle candidates to connect the EMG electrodes: The first step of this procedure is to find the healthy muscles that are going to be monitored by the sEMG electrodes. The muscle activity must be of significant strength, and most devices offer gain and threshold parameters that can help enhance the electrical signals received from the muscles. These muscles can be anywhere on the patient's body.

Learn to activate the correct muscles: When the electrodes are connected to the chosen muscles, the patient is trained to move his hand in a way that activates only one region of muscles at a time. Each of the different muscle signals will correspond to a different command sent to the hand.

Learn to trigger the pre-shapes: The final step is to learn how to trigger different pre-shapes by performing combinations of muscle contractions. These triggers can be holding the hand in the open position for a specific amount of time, co-contraction (the simultaneous contraction of all the input muscles), or a double or triple impulse. An impulse is defined as an opening-motion contraction followed by a relaxation of the hand, so a double impulse is performing this sequence twice. The number and types of pre-shapes the wearer learns to perform depend on his everyday needs and his capability to generate the necessary magnitude of electrical activity with only the correct muscles.

Since, as mentioned before, the sEMG technique does not allow commanding more than four different pre-shapes, most of the commercial companies provide alternative


ways of sending pre-shaping commands to the hand. Touch Bionics lets the patient use a smartphone application to choose the pre-grasp he wants to execute, and the command is sent to the limb via Bluetooth. Although this solution allows full exploration of the hand's functionalities, the patient becomes highly dependent on his smartphone, which adds complexity to the grasping procedure.

2.2.3 Proprioceptive Sensing

Although many research prosthetic limbs include encoders that give feedback about the angles of the fingers, most commercial prosthetic hands are still open-loop systems, and no sensory feedback is currently used for their control. In an open-loop system, the output has no influence or effect on the control action of the input signal. Therefore, the hand receives and executes a command to move a finger, but does not know the finger's position after the motion is completed.

Having an accurate model of the hand could partially solve this problem, since we can estimate the final finger position after each command execution using forward kinematics computations. This solution is very brittle, however, since there is no proprioceptive feedback to help compensate for disturbances or changes in conditions. For example, if the finger encounters an obstacle during its movement, it will stop in a position far from the one estimated by the forward kinematics, but the controller does not have this knowledge. Thus, if the controller sends the next command assuming a wrong position, the hand will end up moving in an undesired way, and there is no way of correcting this error.

A simple solution adopted by many prosthetic devices to overcome this problem is to always bring the hand to a neutral, fully open position before the execution of a new pre-shape, by sending an opening command for each finger with time parameters that exceed the time sufficient for opening the fingers, in order to ensure that the digits reach the fully open position. Although this sequence of commands leads to a correct motion of the hand, it adds unnecessary movement of fingers that would not need to move, or that require only a small adjustment of their position. For example, even if the fingers are in a closed position and the next pre-grasp commands them to end up in the same closed position, the controller of the hand will first command them to fully open and then close again. These extra movements, besides looking unnatural, increase the completion time of each pre-shape and wear down the device.


Open-loop control is very simple and straightforward and does not need extra electronic elements to sense the environment and provide feedback, but it limits the control of the fingers to the empirical sequence of motions explained above. Knowing the position of each finger is important for a more sophisticated control of the hand. It allows transitioning between pre-shapes faster and in a more natural way, without having to perform the full opening motion of the hand every time a new pre-shape is executed; this also reduces the wear on the device over use. Moreover, knowing the position of the fingers can be used for trajectory planning when we want to perform a sequence of movements with the fingers and need to ensure that there are no finger collisions during the motion.

2.2.4 Device abandonment

Studies on the reasons for abandonment of upper limb prostheses by Biddiss and Chau (2007) showed that 88% of patients choose to permanently stop using the prosthesis because they find it too difficult or tiring to use. The primary reason for abandonment is that 98% of the patients consider themselves to be equally or more functional without the device. Other reasons for prosthesis abandonment with a high rating of importance are the device's weight, lack of comfort and lack of sensory feedback. Even with the technology available today, the most frequently used prosthetic terminal devices are still split-hook type devices, because they offer a simple and easy way to accomplish typical, everyday tasks.

These results highlight the need for a more intuitive way of controlling prosthetic hands, one that at the same time takes advantage of the wide range of functionalities that state-of-the-art prosthetic hands offer. The proposed approach focuses on introducing a vision-based controller that recognizes the object the hand approaches and chooses between the available pre-shapes autonomously. This system would restrict the use of sEMG technology to the final opening and closing of the fingers in the final grip phase, thus removing a large amount of mental effort from the patient and simplifying the training procedure. Moreover, the number of possible pre-shapes with the proposed system is limited only by the hand's capabilities and not by the memory of the patient, allowing full use of the available hand


functionalities. Patient comfort and natural-looking, naturally operating prostheses are the motivation behind the closed-loop approach.

2.3 Hardware

The devices we used for this project are the prosthetic hand Robo-Limb (section 2.3.1), a simple Logitech camera (section 2.3.2) for video capture, and the Polhemus electromagnetic tracker (see chapter 4) for the evaluation of our position estimation.

2.3.1 Robo-Limb by Touch Bionics

The prosthetic hand Robo-Limb, developed by the company Touch Bionics, is shown in figure 2.5.

Figure 2.5: i-Limb by Touch Bionics

Robo-Limb is an externally powered, multi-articulating hand with five individually powered digits and six degrees of freedom: five for the opening and closing of the fingers and the sixth for the abduction-adduction movement of the thumb. Some technical information about the hand is presented in table 2.1.

Voltage                                      7.4 V (nominal)
Max. current                                 5 A
Battery capacity                             Rechargeable lithium polymer; 2400 mAh capacity; 1300 mAh capacity
Max. hand load limit (static limit)          90 kg / 198 lb
Finger carry load (static limit)             32 kg / 71 lb
Time from open position to full power grip   1.2 seconds

Table 2.1: Technical information about Robo-Limb

Control of Robo-Limb

The control of the hand is based on surface electromyography technology (sEMG). As described before, Touch Bionics provides a collection of different grasp patterns

suitable for grasping different objects and for different tasks (see also appendix A). Each pre-shape is mapped to a signal pattern, so that whenever the patient moves his hand in a specific way, the hand performs a specific grasp. In order to record these signal patterns, two electrodes similar to those shown in figure 2.6 are placed on active, antagonistic muscles and capture the electrical signals produced by the muscles while the patient moves.

Figure 2.6: Electrodes used to capture signals generated by the muscles.

The information captured by the EMG electrodes is used in two ways. First, the EMG signal is captured, amplified and analysed to recognize a pattern, and a specific pre-grasp is chosen and performed by the hand. Then, until the patient chooses a new pre-grasp by performing a new predefined sequence of moves, the signals are translated into closing and opening movements of the fingers.

CAN connection

The Robo-Limb prosthetic hand communicates with the computer via a CAN network. CAN, which stands for Controller Area Network, is an International Organization for Standardization (ISO) standardized, high-integrity serial bus system for networking intelligent


devices. Originally, it was developed by Bosch (2014) as the standard in-vehicle network for connecting electronic devices in vehicles, replacing point-to-point wiring systems. The prosthetic hand is plugged into the computer with a PCAN-USB connector and receives CAN commands with fields such as those shown in figure 2.7.

Figure 2.7: An example of the command structure as sent to the Robo-Limb. It commands the ring finger (id=0x104) to close (state=1) at maximum speed (297 = 0x129).

The id of the message is defined as the digit number, which goes from 1 to 6 (1 to 5 for the five digits and 6 for the thumb abduction/adduction), and the data field is the concatenation of the state and the desired velocity of the digit's movement. The state variable can be set to 0 to command a stop, 1 for close and 2 for open.

The PCAN-BUS also allows the hand to publish feedback information about the motors' activity. The only feedback information available from Robo-Limb is the current load and state of each digit. Every 20 milliseconds a message of the form shown in figure 2.8 is published, containing the number of the digit as the command mailbox id and a message payload of two words (2x16 bits). The high byte of the high word contains the thumb rotator switch status, which is 0 if the thumb is not fully palmar or lateral and 1 if the thumb is fully palmar or lateral. The low byte of the high word contains the digit status and can take one of the following values:

• 0, for stop
• 1, for closing
• 2, for opening
• 3, for stalled closed
• 4, for stalled open

The low word contains the raw A/D value (12 bit) of the measured motor current draw. An example of the feedback message received from the ring finger is shown in figure 2.8. Touch Bionics uses the current load to stop the motion of the fingers when they hit an obstacle, in order to prevent damage to the hand or to the object to be grasped.
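To make these frame layouts concrete, below is a minimal sketch of sending a digit command and decoding a feedback frame over the PCAN-USB adapter, written with the python-can library. The byte ordering of the state/velocity words and the bus bitrate are our assumptions based on figures 2.7 and 2.8, not an official Touch Bionics specification.

    # Hedged sketch of the Robo-Limb CAN interface; frame layout is assumed
    # from figures 2.7 and 2.8 (big-endian 16-bit words).
    import can

    COMMAND_BASE = 0x100   # command id = 0x100 + digit number (0x104 = ring)
    FEEDBACK_BASE = 0x200  # feedback id = 0x200 + digit number
    STOP, CLOSE, OPEN = 0, 1, 2

    def send_digit_command(bus, digit, state, velocity):
        """Command one digit: data = 16-bit state word + 16-bit velocity word."""
        data = [(state >> 8) & 0xFF, state & 0xFF,
                (velocity >> 8) & 0xFF, velocity & 0xFF]
        bus.send(can.Message(arbitration_id=COMMAND_BASE + digit,
                             data=data, is_extended_id=False))

    def parse_feedback(msg):
        """Decode a feedback frame into (digit, rotator, status, amps)."""
        digit = msg.arbitration_id - FEEDBACK_BASE
        rotator = msg.data[0]                    # 1 if thumb fully palmar/lateral
        status = msg.data[1]                     # 0-4 as listed above
        raw = (msg.data[2] << 8) | msg.data[3]   # 12-bit A/D current value
        return digit, rotator, status, raw / 21825.0  # raw-to-amps, per fig. 2.8

    if __name__ == "__main__":
        bus = can.interface.Bus(interface="pcan", channel="PCAN_USBBUS1",
                                bitrate=1000000)  # bitrate is a guess
        send_digit_command(bus, digit=4, state=CLOSE, velocity=297)  # fig. 2.7
        print(parse_feedback(bus.recv()))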


Figure 2.8: An example of the feedback message structure as sent by the Robo-Limb. It is the response of the ring finger (id=0x204), indicating that it is closing (state=1) with a measured current draw of 87.2 mA (raw value 1904, divided by 21825 to convert to amps).

When the current magnitude exceeds a threshold, the finger is considered to be in contact with an obstacle and a stop command is sent. In the process of improving the hand's performance, one of the goals of this project is to close the loop in the control of the prosthetic hand, by extracting information about its movement and the presence of obstacles from the digits' current load and state, as read from the PCAN-BUS. The process we follow in order to provide proprioceptive feedback is presented in section 3.4.
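As a rough illustration of this stall-detection idea, the check can be a simple thresholded comparison on the decoded current. The threshold value below is an arbitrary placeholder, not the one used in the Touch Bionics firmware, and the helper functions are the ones sketched above.

    # Hypothetical stall detection: stop a digit when its current spikes.
    STALL_CURRENT_AMPS = 0.5  # placeholder threshold, not the firmware's value

    def check_for_contact(bus, digit):
        """Poll one feedback frame; stop the digit if contact is suspected."""
        msg = bus.recv(timeout=0.1)      # feedback arrives every 20 ms
        if msg is None:
            return False
        fb_digit, _, status, amps = parse_feedback(msg)
        moving = status in (1, 2)        # closing or opening
        if fb_digit == digit and moving and amps > STALL_CURRENT_AMPS:
            send_digit_command(bus, digit, STOP, velocity=0)
            return True
        return False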

2.3.2 Camera

As mentioned in the introduction, the second goal of this project, besides the proprioceptive feedback, is to investigate a semi-autonomous vision-based approach to the pre-shaping phase, where a camera is mounted on the prosthetic hand and used to recognise objects and automatically trigger the corresponding pre-shape. For the vision purposes, a simple low-cost webcam was chosen. The Logitech B910HD web camera provides HD 720p video at up to 30 frames per second, with autofocus technology and good visual quality in low light at multiple distances. The camera is mounted on the back of the hand, at the base of the fingers, as shown in figure 2.9.

2.4 Software

What follows is a description of the software and libraries we used throughout the project. The QR coding system is presented, along with the vision libraries used for capturing video and processing the image stream. There is also a quick description of the Robotic Operating System ROS (2014), which provides the connection between the different hardware and software components employed in this project.


Figure 2.9: Position of the webcam on the prosthetic hand.

2.4.1 Quick Response Codes

As a first simple application in the image recognition experiments, we labelled a selection of objects with quick response (QR) codes and implemented a recognition algorithm to detect the QR codes and command the hand to perform the corresponding pre-shape. A quick response code is a type of two-dimensional matrix barcode which has the form of a small white square with black geometric shapes, like the one shown in figure 2.10. Each of these codes can represent different information, such as a URL, a phone number or any text.

Figure 2.10: An example QR code.

The QR-code labelling is used as a proof of concept for evaluating the vision-based system we propose and describe in section 3.3.

QR codes have certain properties that make them perfect candidates for this application. Firstly, QR code recognition is simple, and efficient algorithms already exist, offering fast recognition. Due to their square shape, 100 times more information is encoded in a smaller area than in simple 1-D barcodes, and the three large


squares are used for symbol alignment and orientation-invariant recognition. Moreover, QR codes have the capability to recover data lost to dirt or damage on the code. The Reed and Solomon (1960) error-correction algorithm is used in the creation of QR codes to provide four different error-correction levels, allowing resistance against damage of between 7% and 30%. These error-correction levels affect the amount of information each code can hold. The simplest QR code is 21x21 squares in size and can hold from 17 to 41 numeric characters. This is also the size we used in our experiments.

2.4.2 Libraries for vision

The libraries we used for video capture, image processing and recognition of the QR codes are OpenCV and ZBar.

The OpenCV (2014) library is an open-source, cross-platform computer vision library that focuses on real-time image processing, with many computer vision algorithms and utilities that can be used and built upon. It provides the connection with input devices such as cameras and shows the data received from them in real time.

The ZBar (2014) library is used for the recognition and decoding of the QR codes. It uses the webcam as a barcode scanner to decode the barcode images. Decoding of 1-D barcodes via laser technology works using a light sensor that passes over the barcode, recognizes dark and light areas based on the reflected light, and decodes the symbol. The ZBar implementation uses the same technique but with a camera sensor, where each pixel of an image is treated as a sample from a single light sensor. A high-level description of the modules provided by ZBar for scanning, decoding and assembling the data is presented in figure 2.11.

Figure 2.11: Steps of the ZBar library from image capture to QR symbol decoding.

The image stream from the video source is scanned in order to recognise changes in the intensity of the pixel values. These pixel values are scanned


linearly, and the bar-width information of the patterns that appear on the QR codes is passed to the decoder, which extracts the data stored in the QR symbol.
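As a rough illustration of this pipeline, a frame can be grabbed with OpenCV, converted to greyscale and handed to ZBar's scanner. The sketch below uses the original zbar Python bindings (newer projects often use pyzbar instead); it is a minimal example, not the exact code of our image decoder.

    # Minimal sketch: decode QR codes in one webcam frame with OpenCV + ZBar.
    import cv2
    import zbar

    def decode_qr_from_camera(device=0):
        cap = cv2.VideoCapture(device)
        scanner = zbar.ImageScanner()
        scanner.parse_config('enable')   # enable all supported symbologies
        ok, frame = cap.read()
        if not ok:
            return []
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        # ZBar expects raw 8-bit greyscale ('Y800') pixel data.
        image = zbar.Image(w, h, 'Y800', gray.tobytes())
        scanner.scan(image)
        return [symbol.data for symbol in image]

    if __name__ == '__main__':
        print(decode_qr_from_camera())   # e.g. ['3'] for the crisps label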

2.4.3 Robotic Operating System (ROS)

The Robo-Limb hand, the camera, the Polhemus tracking device and the computer are connected using ROS, an open-source robot operating system created by Quigley et al. (2009). ROS is a framework oriented towards robotics research and applications and, as an operating system, provides hardware abstraction, low-level device control, implementations of commonly used functionality, message passing between processes, and package management. It also comes with a collection of libraries and tools for obtaining, building, writing and running code across multiple computers. The OpenCV library is tightly integrated into the ROS system and is used for reading data published by cameras of various types and for applying various image-processing algorithms. The main ROS concepts used within the context of this project are presented below.

Packages: Packages are the main organization unit in ROS. Each different project is organized as a package and may contain ROS runtime processes (nodes), a ROS-dependent library, datasets, configuration files, or anything else that is usefully organized together.

Nodes: Each project organized as a package consists of different computational processes called nodes. A node can contain the code for a specific functionality of the robot, for example motion control, reading a specific sensor's data, or connecting external devices together and exchanging messages between them.

Messages: Nodes communicate with each other by passing messages. Messages are simple files that define the data structures that are published and received in ROS.

Master: The ROS Master provides name registration and lookup to the nodes. Without the Master, nodes would not be able to find each other and exchange messages.

Topics: Topics are names used to identify the content of a message. When a node sends a message, it publishes it to a given topic. The nodes that are interested


in the content of this topic subscribe to it. Multiple publishers or subscribers can exist for a single topic, and a single node may publish and/or subscribe to multiple topics. Figure 2.12 shows the structure of a ROS package containing two nodes, one publishing data to and one subscribing to read data from a common message topic.

Figure 2.12: Example of the structure of a ROS package containing two nodes, one publishing data to and one subscribing to read data from the common message topic.
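For concreteness, a publisher/subscriber pair in rospy looks roughly like the sketch below. The node, topic and message type here are illustrative placeholders, not the ones used by our package (those are listed in section 3.2.2).

    #!/usr/bin/env python
    # Minimal rospy sketch: a publisher and a subscriber sharing one topic.
    # The topic name 'pregrasp_demo' is illustrative only.
    import rospy
    from std_msgs.msg import Int8

    def on_pregrasp(msg):
        rospy.loginfo("received pre-grasp type %d", msg.data)

    if __name__ == '__main__':
        rospy.init_node('pregrasp_demo_node')
        pub = rospy.Publisher('pregrasp_demo', Int8, queue_size=10)
        rospy.Subscriber('pregrasp_demo', Int8, on_pregrasp)
        rate = rospy.Rate(1)                 # publish at 1 Hz
        while not rospy.is_shutdown():
            pub.publish(Int8(data=3))        # announce pre-grasp type 3
            rate.sleep()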

rqt

ROS also offers some tools for visualization purposes. rqt is a Qt-based framework for developing graphical interfaces for a robot. One of the rqt plugins we used in this project is rqt_plot, a GUI plugin that visualizes numeric values from a topic in a 2-D plot. It is useful for monitoring encoders, voltages, or anything that can be represented as a number that varies over time. An example rqt_plot showing the current load of the index finger during its movement is given in figure 2.13.

2.5 Similar Visual Approaches in literature

Mounting a camera on the hand of a robot is not a novel idea. The Baxter (2014) robot has a camera located on each of its hands. Yet Baxter's hook-like, one-degree-of-freedom grippers are used for simple pick-and-place tasks and are not able to perform a complex grasp. At the present time there are no commercial prosthetic hands that employ vision as part of their controller.


Figure 2.13: rqt_plot of the current load of the index finger.

The only similar previous approach is by Dosen et al. (2010). A cognitive vision system was implemented by adding a camera and an ultrasonic sensor to the CyberHand prosthetic hand (Cyberhand, 2014), providing the functionality of generating nine different commands with a success rate of 84%. Some issues with their approach were the counter-intuitive use of a laser, which the user had to point towards the goal because the hardware was not aligned with the forearm axis, and the slow image processing, which resulted in large response delays. Although this project was published in 2010, no further progress has been published.

Chapter 3

Methodology

As described in the background (chapter 2), this project addresses two issues regarding the design and control of modern prosthetic devices: the limitations of the sEMG controller, described in section 2.2.1.1, and the lack of proprioceptive feedback in the control of the hand. What follows is a description of the methods we propose in order to deal with each of these drawbacks individually.

3.1 Proposed System

The initial state of our controller is the vision state, in which the object to be grasped is recognised from the camera input as belonging to one of a set of predefined object classes. As soon as the object is recognised, the second state is autonomously triggered. In this state, the pre-shape that is mapped to the recognised object is executed by the hand. Up to this state, the user controls only the transport of the hand to a position close to the object. The final state, which corresponds to the grip phase, is fully controlled by the user via sEMG signals. The patient commands the opening and closing of the fingers in order to grasp the object, pick it up or manipulate it in any other way. Eventually, the hand performs a different pre-shape when it recognises a new object in the camera's view. Figure 3.1 shows the separate stages of grasping an object as described.


Figure 3.1: Flow diagram showing the stages from seeing the object to grasping it.

Figure 3.2: ROS nodes and message topics of our project.

3.2 Interface Implementation

The hardware used in our experiments consists of a computer running the Linux distribution Ubuntu 12.04, the prosthetic hand Robo-Limb, a webcam and a Polhemus Liberty tracking device for evaluation purposes (more information about the Polhemus device is given in chapter 4). All these devices are connected using the ROS interface and communicate by publishing and subscribing to messages. The structure of our package, with all the nodes and messages exchanged between the nodes and the devices we used - the camera, the Robo-Limb hand, the Polhemus tracker and the computer - is shown in figure 3.2.

3.2.1 ROS nodes

What follows is a brief description of the nodes used by our system.


camera gscam: The Gscam (2014) package is a ROS camera driver that uses GStreamer, a framework for creating streaming media applications, to connect to devices such as webcams. This node sets up a ROS camera interface and publishes the unprocessed image data.

image decoder: The image decoder subscribes to the camera data published by the gscam node and uses the OpenCV and ZBar libraries to process them. The camera stream is scanned for QR codes corresponding to the numbers 1-12, and when one of them is found the decoded number is published as the pre-grasp type number (a sketch of this node is given after this list). In figure 3.2, the gscam and image decoder nodes are grouped into one node called image processing.

robotcommands: The robotcommands node is responsible for translating the pre-grasp type number into a CAN message that is executed by Robo-Limb. It listens to the message that contains the information about the grasp type and, when it receives a new pre-grasp type, loads the parameters defining the digit id, desired state and velocity that describe the pre-grasp and translates them into CAN commands to be executed by the prosthetic hand.

robotstatus: The robotstatus node provides feedback information about the state and current levels of the five digits. It listens to the PCAN BUS and publishes the feedback information every 20 ms.

currentListener: The currentListener node subscribes to the feedback information published by the robotstatus node and processes it in order to extract the execution time of each command and whether the digit has encountered an obstacle. The way this is calculated is explained in section 2.3.1. After the unobstructed time of the command is estimated, the node publishes, for each digit, the temporal distance from the fully opened and fully closed positions and the angle the finger has travelled during the command with respect to the fully open position.

emg simulator: Since we cannot use real EMG techniques in our experiments, the emg simulator node is implemented as a substitute for the commands that would be issued by the patient. It waits for keyboard input corresponding to an opening motion of the fingers when the 'a' key is pressed and a closing movement when the 's' key is pressed. The node publishes a pre-grasp command that is processed by the robotcommands node and sent as a CAN message.

polhemus: The polhemus node is used to connect the Polhemus tracking device to the computer and publishes the (x, y, z) coordinates corresponding to the position of each Polhemus sensor.
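As an illustration of how such a node can be put together, below is a hedged sketch of the image decoder in rospy, combining cv_bridge image conversion, ZBar decoding and publication on the pregrasp topic. The node and topic names follow figure 3.2, but the message type (a plain Int8) is a simplification of whatever our package actually uses.

    #!/usr/bin/env python
    # Sketch of the image decoder node: camera frames in, pre-grasp number out.
    # Int8 is an assumed message type; topic names follow figure 3.2.
    import rospy
    import zbar
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image
    from std_msgs.msg import Int8

    bridge = CvBridge()
    scanner = zbar.ImageScanner()
    scanner.parse_config('enable')
    pub = None

    def on_image(msg):
        gray = bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')
        h, w = gray.shape
        frame = zbar.Image(w, h, 'Y800', gray.tobytes())
        scanner.scan(frame)
        for symbol in frame:
            data = symbol.data
            if data.isdigit() and 1 <= int(data) <= 12:  # labelled objects
                pub.publish(Int8(data=int(data)))

    if __name__ == '__main__':
        rospy.init_node('image_decoder')
        pub = rospy.Publisher('pregrasp', Int8, queue_size=10)
        rospy.Subscriber('camera/image_raw', Image, on_image)
        rospy.spin()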

3.2.2 ROS message topics

The message topics exchanged between the different nodes are the following.

camera/image raw is published by the gscam node and holds the unprocessed image data from the camera stream.

pregrasp is published by the image decoder node and is a number between 1 and 12, which is then translated into a pre-shape command by the robotcommands node.

pregrasp EMG is published every time the emg simulator registers a key press. The robotcommands node subscribes to this message and translates it into a CAN command.

robotfeedback is the message published by robotstatus, and holds the status and current load information for each digit.

polhemus holds the (x, y, z) coordinate values for each digit, as read by the Polhemus device. It is used as input to the rqt plot node, which shows the difference between the real angle the digit has travelled and the one estimated by the currentListener, published on the timeleft topic.

timeleft is published by the currentListener node, and holds information about the angular distance, expressed in time, that the finger has travelled from the fully opened and fully closed positions.

3.2.3 Pre-grasps configuration

Currently, commercial prosthesis companies offer a collection of pre-programmed pre-grasps that are available on the prosthetic limbs, and the patient is trained to use some of them. These pre-grasps correspond to suitable grasps for everyday objects


and fall into six general categories of grips depending on the object type and use: hook, power, lateral, cylindrical, tripod and pinch grip. These correspond to the main pre-shapes in the Cutkosky (1989) taxonomy. Based on these categories, Touch Bionics (2014) offers a collection of 24 standard and custom pre-grasps, presented in appendix A. Not all of these pre-shapes are unique. For example, in the precision pinch grip category, four slightly different pre-shapes are available, based on the pinch formation of the hand, offering the same general grip but a different pose of the hand. The same holds for the available tripod grip options.

Each pre-grasp corresponds to a different formation of the fingers and allows movement only of specific digits. For example, for the tripod grasp, the hand is initially in an open position, as shown in figure 3.3a, and only the thumb, index and middle finger are allowed to move. When the closing command is triggered, these three fingers close as shown in figure 3.3b, while the rest of the fingers stay still. Subsequent opening and closing commands alternate between the states shown in 3.3a and 3.3b respectively.

Figure 3.3: The tripod grasp in the open (a) and closed (b) states. 3.3a shows the pre-grasp when first executed and 3.3b shows the formation of the fingers after a closing command has been performed.

The nine objects chosen for this project are presented in table 3.1, along with the names and categories of the pre-grasps considered most suitable for their manipulation. Videos showing each grasp used with the objects can be found, along with the code of this project, on the webpage: https://wcms.inf.ed.ac.uk/ipab/slmc/research/undergraduate-and-msc-projects/touch-bionics-grasping-hand-project. The choice of the specific pre-grasps used in our experiments was based on our attempt to cover a wide range of the available categories of pre-shapes, but also to operate all the fingers over their whole range of movement.


object      QR code   grasp name                         category      steps
cup         1         cylindrical                        cylindrical   2
fork        2         lateral                            lateral       2
crisps      3         chuck closed                       tripod        2
keys        4         standard precision pinch opened    pinch         2
mouse       5         mouse                              -             3
glove       6         donning or doffing a cover         -             1
keyboard    7         index point                        -             1
plate       8         open palm                          -             1
spray       9         two finger trigger                 -             3

Table 3.1: The objects used in our experiment and the suitable grasps for their manipulation, according to Touch Bionics. The category corresponds to the Schlesinger (1919) grasp taxonomy and the last column gives the execution steps needed to complete each pre-shape. Videos showing the pre-shapes used to grasp each object are included with the supporting material of this thesis, at: https://wcms.inf.ed.ac.uk/ipab/slmc/research/undergraduate-and-msc-projects/touch-bionics-grasping-hand-project

Besides the formation category, the available pre-shapes can also be described by their execution steps.

Passive grasp The first category is the passive pre-grasp, which has a single execution step, the formation of the hand, and does not allow any movement of the fingers.

Two-step grasp The two-step execution category is the most common; after the pre-shape, the fingers are allowed to either close or open, based on the EMG input.

Triple-step grasp The third category requires an extra step to complete. First the pre-shape is triggered and performed by the hand. When the hand is commanded to close, it moves the fingers until it grasps the object, and any subsequent closing command results in the closure of a subset of the fingers active in the pre-grasp. An example grasp belonging to this category is the mouse grasp: the hand pre-shapes for the mouse, closes the fingers to stably hold it with the first closing command, and any subsequent closing command creates the 'click' motion of the index finger. The hand releases the mouse when an open command is sent.


XML files

Robo-Limb commands are formed by defining the digit to be moved, the status of the digit, which can be idle, closing or opening depending on its desired state, and the duration in milliseconds for which the finger is going to move. The pre-defined pre-grasps are saved as XML files of the form (3.1). Each XML file can contain commands for one to all of the fingers.

<Digit1 iState="1" iVelocity="250" iStartTime="0" iSendTime="100"/>    (3.1)

The two time variables in the XML command (3.1) define when the digit starts moving (iStartTime) and how long the movement lasts in ms (iSendTime). The iStartTime is used to sort the commands so they are sent sequentially, and iSendTime is the time between the command that initiates the movement and the command that terminates it. For the execution of each pre-grasp we need three different XML files: one specifying the actual pre-shape of the hand, one with the parameters for closing the appropriate fingers, and a third for opening the fingers to a releasing pose. We implemented an XML parser for this form of commands so that, as soon as the object is recognized and the suitable pre-grasp is chosen, the corresponding XML files are loaded and executed automatically. The opening and closing commands are loaded and executed every time the EMG controller sends the corresponding command to the fingers.
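As an illustration of how such a file can be consumed, the following sketch parses commands of the form (3.1) with Python's standard XML library; it assumes the Digit elements are wrapped in a single root element, and the numeric coding of iState is hypothetical.

    # Minimal pre-grasp XML parser sketch; root element and iState coding are assumed.
    import xml.etree.ElementTree as ET

    def load_pregrasp(path):
        """Return (digit, state, start_ms, send_ms) tuples sorted by start time."""
        root = ET.parse(path).getroot()
        cmds = [(d.tag,
                 int(d.get('iState')),      # e.g. 0 idle, 1 closing, 2 opening (assumed)
                 int(d.get('iStartTime')),
                 int(d.get('iSendTime')))
                for d in root]              # <Digit1 .../> ... <Digit6 .../>
        return sorted(cmds, key=lambda c: c[2])   # iStartTime orders the commands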

3.2.4

Final trigger

As shown in figure 3.2, the image processing node publishes the chosen pre-grasp that is mapped to the recognized object, and this automatically triggers the pre-grasp execution. After the pre-shaping of the hand, the final stage to complete the grasp is the closing of the fingers around the object. In real-life applications, the patient wearing the prosthesis reaches for the object and, when he believes the hand is close enough to grasp it, uses myoelectric signals to command the closing motion around the object. A different muscle flexion, mapped to the opening of the fingers, is performed when the patient wants to release the object.


Since we do not have an EMG controller, for our experiments we implemented an EMG simulator that listens for a key press on the computer keyboard to send the command for opening or closing the fingers. The hand performs a predefined movement, described by the XML form of section 3.2.3, commanding only the fingers that are allowed to move in the specified pre-grasp. The EMG simulator is only a substitute for the EMG controller: it does not allow the patient to control how long the fingers move, or the power and speed of their movement.
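A minimal sketch of such a simulator node is given below; the key bindings ('c' to close, 'o' to open) and the raw-terminal key reading are assumptions, intended only to illustrate the idea.

    # Hedged sketch of the EMG simulator: a key press is mapped to an
    # open/close trigger on the pregrasp_EMG topic; bindings are assumed.
    import sys, termios, tty
    import rospy
    from std_msgs.msg import String

    def read_key():
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)
            return sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)

    rospy.init_node('emg_simulator')
    pub = rospy.Publisher('pregrasp_EMG', String, queue_size=1)
    while not rospy.is_shutdown():
        key = read_key()
        if key == 'c':
            pub.publish('close')   # simulated 'close' muscle flexion
        elif key == 'o':
            pub.publish('open')    # simulated 'open' muscle flexion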

3.3

Vision Control

The proposed controller consists of three states that correspond to the grasping phases described in the background (section 2.1, figure 2.1). The vision state is the first state in the pipeline of the pre-grasp procedure. In this state, the image captured by the camera is processed until the object is recognized: the camera stream is taken as input, and the class to which the object belongs is the output. For simplicity, due to the limited time of this project, we use simple QR codes to annotate each object and an off-the-shelf algorithm to recognize the codes. This serves as a proof of concept that our semi-autonomous vision-based approach to the pre-grasp state leads to more intuitive and advanced control of the prosthetic limb. The vision algorithm can later be replaced by a more sophisticated real-time image recognition algorithm operating on raw video data, which recognises the objects without requiring the QR code annotations.

3.3.1

Camera Position

The first issue was the mounting position of the camera. Different positions were considered, including the inner part of the palm and the back of the hand at the base of the fingers. The latter was chosen as the most appropriate, and the camera was mounted parallel to the axis connecting the wrist with the fingers, as shown in figure 3.4. This position allows an unobstructed view of the world, whereas positioning the camera in the palm would result in occlusions caused by the fingers. To have a clear view of the object with the camera positioned on the inside, the fingers would have to be fully open during the approaching phase, which is not always possible. Moreover, approaching an object with a fully open palm causes the hand to occupy more space, making it prone to collisions with obstacles and damage. The resulting pose also does not look natural, a parameter that is highly correlated with distress and abandonment of the prosthesis by the patient, as suggested by Biddiss and Chau (2007).

Figure 3.4: The selected mounting position of the camera.

Regarding the natural and attractive look of the hand, it is important to mention that the camera used for these experiments is a low-cost webcam, and the configuration shown in figure 3.4 should only be perceived as a simple prototype to investigate the efficiency of a vision-based system in the pre-grasping stage. The result is bulky and impractical for everyday tasks. If this approach were incorporated into a real product, more discreet and lightweight camera solutions should be investigated.

3.3.2

QR codes Object Recognition

After mounting the camera, we proceeded to the image processing and QR-code based object recognition. First, the nine commonly used everyday objects listed in table 3.1 were chosen, and each of them was labelled with a distinctive numerical QR code, ranging from 1 to 9. An example of a simple object annotated with a QR code is shown in figure 3.5a. Each of these codes is mapped to a different pre-grasp that is considered the most suitable for grasping the corresponding object; in the case of the key chain, the corresponding pre-shape is the tripod, as shown in figure 3.5b. When the object to be grasped is chosen by the user, he moves the hand with the camera facing the desired object.


(a) An example of using a QR code on the key chain

(b) The tripod grasp used to hold the keys

Figure 3.5: 3.5a shows an example of a QR-code label attached to an object, and 3.5b shows the tripod pre-grasp used to hold the key chain.

The video data stream is processed using the OpenCV (2014) and ZBar (2014) libraries, and the QR code is decoded. Once the code is recognized, the object is known and is published via a ROS message to the pre-shaping execution state, which chooses the pre-shape configuration corresponding to this object and executes it.
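The decoding step itself is short. The sketch below illustrates it with the classic ZBar Python bindings; the function name and the assumption that the decoded payload is the numeric pre-grasp id are ours.

    # Hedged sketch of the QR decoding step with OpenCV and ZBar.
    import cv2
    import zbar

    scanner = zbar.ImageScanner()
    scanner.parse_config('enable')

    def decode_pregrasp(frame):
        """Return the pre-grasp id encoded in the first QR code found, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        height, width = gray.shape
        image = zbar.Image(width, height, 'Y800', gray.tobytes())
        scanner.scan(image)
        for symbol in image:
            if symbol.type == zbar.Symbol.QRCODE:
                return int(symbol.data)   # codes 1-9 map to pre-grasps
        return None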

3.4

Current Listener Controller

The second half of this project focuses on closing the loop by estimating the position of the fingers over time. As mentioned in the background section, the Robo-Limb prosthetic hand is an open-loop device: it lacks the position encoders that would provide proprioceptive feedback. This introduces limitations in the control of the hand, since at any specific time the position of the fingers is unknown, so the transition from one pre-shape to another is not possible in a straightforward and natural-looking manner.

3.4.1

Open-loop Solutions

One solution, also used by Touch Bionics to ensure correct operation of the hand in an open-loop manner, is to fully open the fingers before moving them into the positions defined for each pre-shape.

The fingers, as shown in figure 3.6, are constructed as two links connected at one joint and, as a whole, mounted on the hand and controlled by a single motor situated at the base of the finger. The motor controls the movement of the link that is closer to the palm (the proximal phalanx link). For the motion of the upper link (the distal phalanx link), a tendon-like mechanism is used, with strings connected to small rollers at the base of the finger; the other end of each string is attached to the distal link. When the motor moves the proximal link, the strings are pulled and the finger flexes in a motion very similar to that of a human finger. This mechanism offers a stiff grasp in the closing motion, but the distal links of the fingers exert no force in the opening motion.

(a) Finger links

(b) Inner mechanism

Figure 3.6: The Robo-Limb digit consists of two links, the proximal phalange link and the distal phalange link, as shown in 3.6a. Figure 3.6b shows the inner tendon-like mechanism of the finger. Figures taken from Belter et al. (2013).

So, unless the hand is forcefully held against an obstacle on its external side that locks the lower half of the fingers while they open, the fingers will perform the opening motion and start their pre-shaping movement from a known position. Since the initial position is known, the command corresponding to the pre-shape is defined by closing commands of specific durations for the fingers that need to close; the fingers that are open in the pre-shape can be ignored. This solution works well when alternating between different pre-shapes, but it is neither efficient nor natural looking. The time needed to complete the pre-shape, and thus the time the patient waits for the hand to take the appropriate shape before using it, increases significantly (see chapter 4 for a numerical comparison).

In our attempt to reduce the time needed to change between different pre-shapes, we pre-process the XML commands (see section 3.2.3 for the command form and use) before they are sent for execution. Each finger is treated independently and, if it is commanded to a fully open or fully closed position, commands of more than sufficient duration are sent, to ensure that the finger ends up in the desired open or closed position. For more precise movement of a finger to an intermediate position, we cannot avoid performing the previous sequence of first fully opening the finger and then moving it to the precise position. This solution addresses the natural look of the hand when transitioning between different pre-shapes, but does not provide position information.

Our proposed way of closing the loop involves monitoring the current drawn by the Robo-Limb via the CAN connection and using it to extract proprioceptive feedback. The procedure we followed is described in the remainder of this section.
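The pre-processing of extreme commands can be sketched as follows; the per-finger full-travel times are taken from table 3.2, while the 1.5x margin (mirroring the calibration procedure of section 3.4.3) and the command representation are assumptions.

    # Sketch of the open-loop command pre-processing described above.
    FULL_TRAVEL_MS = {'thumb_flex': 990, 'thumb_rot': 1080, 'index': 1020,
                      'middle': 1040, 'ring': 900, 'little': 920}  # from table 3.2

    def preprocess(finger, state, send_time_ms):
        """Pad commands that target the fully open/closed position so the
        finger is guaranteed to reach its mechanical limit."""
        full = FULL_TRAVEL_MS[finger]
        if send_time_ms >= full:             # command targets an extreme position
            send_time_ms = int(1.5 * full)   # overshoot; the motor stalls at the limit
        return (finger, state, send_time_ms)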

3.4.2

Current patterns

First we needed to determine whether there were significant patterns in the current that could be used to extract the duration of each command and to recognize possible collisions of the fingers with obstacles. After performing opening and closing movements of the fingers, first without and then with obstacles, four different patterns in the current levels were observed, presented in figure 3.7. When the finger does not move, the current feedback equals zero (figure 3.7a), as expected. When the finger moves for a specific time, the current levels rise above zero; figure 3.7b shows the current activity of a finger while it moves, where the oscillations correspond to an opening motion. All obstacle-free movements have this specific pattern: the current rises very quickly, but does not stay at this level for long and falls to a low, but non-zero, magnitude. When an obstacle is encountered after the finger has covered some distance, we see a behaviour similar to figure 3.7c, where the current suddenly rises steeply to significantly higher levels than those at the beginning of the obstacle-free motion, and remains at this high magnitude for as long as the obstacle is present. Finally, figure 3.7d shows the current activity when the finger does not move at all, because an obstacle prevents it from moving.

3.4.3

Extracting time

Our next step was to determine whether timing information could be extracted from the current patterns.


(a) No motion.

(b) Motion without obstacles

(c) Motion with obstacle

(d) Fully obstructed motion

Figure 3.7: Illustration of the different current load patterns for the index finger in four different states of motion. Figure 3.7a shows the current load when the finger stands still. In figure 3.7b, the finger completes its movement without colliding with any object. The sudden rise in the current levels in 3.7c indicates the presence of an obstacle in the second half of the finger's motion. Finally, figure 3.7d shows oscillations similar to those of figure 3.7b, but at a higher current level, indicating that the finger did not move at all because of an obstacle in its path.

As mentioned in the previous section, when the hand is still the current feedback equals zero. Moreover, the time delay between two consecutive movements of the same digit is at least 500 milliseconds. We can therefore isolate each finger movement by measuring, in milliseconds, how long the oscillations in the current persist between two extended regions in which the current is zero.

Calibration

The first step, before starting to measure time, is a calibration process to acquire the timing and angular parameters related to the specifications of the hand. The calibration steps are the following:

1. Measure the angle each proximal link of the finger can cover, based on the specifications of the hand.


2. Measure the temporal distance from the fully open to the fully closed position and vice versa. This is achieved by sending commands with time parameters 1.5 times greater than the time specified by the company as necessary to cover the full open-to-close distance. When the finger reaches the terminal open/closed position, the current levels rise, as if it had hit an obstacle. Thus, the time needed for a full motion of the finger is the unobstructed time estimated by the current listener when these extreme commands are sent.

Specifically, the current listener monitors the current levels for disturbances. When there is significant activity in the current levels, it measures the time that elapses until the current returns to zero; this is perceived as the command time. The same process is used for estimating the time during which an obstacle is present and obstructs the finger's motion: the current listener searches for current activity over a threshold that indicates the presence of an obstacle, and records the time the current stays over this threshold until it falls below it again, which means that the obstacle has been removed from the finger's path. After the movement of the finger is completed, the time during which the finger was in contact with an obstacle is subtracted from the overall command time, giving an estimate of the unobstructed time of the finger's motion.

Table 3.2 shows the time it takes for each finger to fully close from an open position and to fully open from a closed position, when the commands are sent with the maximum available speed parameter. Knowing the full time needed by the finger to cover the open-to-close distance, and the time the finger moves unobstructed during a command, we can estimate the time the finger needs to fully open or close. The relation between the measured times is shown in figure 3.8. It is important to mention that throughout all our experiments the velocity at which the fingers move is the same and equals the maximum available by the hand's specifications; a different velocity requires a different amount of time to fully move the finger from the open to the closed position and vice versa.
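The timing logic of this step can be summarised in a few lines; the sketch below is a simplification in which the obstacle threshold value and the sample format are assumptions.

    # Hedged sketch of the current-listener timing estimation.
    OBSTACLE_THRESHOLD = 0.5   # assumed current level indicating a blocked finger

    def unobstructed_time(samples):
        """samples: (timestamp_ms, current) pairs for one isolated movement.
        Returns the time the finger moved free of obstacles."""
        active = [(t, c) for t, c in samples if c > 0]
        if not active:
            return 0
        total = active[-1][0] - active[0][0]             # overall command time
        blocked = sum(t2 - t1 for (t1, c1), (t2, _) in
                      zip(active, active[1:]) if c1 > OBSTACLE_THRESHOLD)
        return total - blocked                           # time moving unobstructed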

3.4.4

Angular distance

Until now, all our calculations have been in the time domain. Since the commands we send to the hand specify only time and state (for information about the commands, see section 3.2.3), the time estimation should be enough for the hand's control.


                             open (s)   close (s)   angle
thumb flexion/extension      0.99       0.99        77°
thumb abduction/adduction    1.08       1.08        100°
index finger                 1.00       1.02        75°
middle finger                1.04       1.02        76°
ring finger                  0.90       0.90        74°
little finger                0.90       0.92        70°

Table 3.2: Times to fully open and fully close the fingers in seconds, and the angle each finger can move in degrees.

Figure 3.8: Relation between the three estimated times: 1. open-to-close, 2. time left to close, and 3. unobstructed motion time, which equals the time left to open.

For evaluation purposes, and in order to present the results in a more intuitive metric, we translate the time the finger moved unobstructed into the angle travelled by the finger. The angle each finger can cover during its motion, from the fully open to the fully closed position, is known from the specifications of the robot. The absolute time for a full (open-to-close) motion of each finger is calculated as presented in section 3.4.3. With these two parameters, the angle and the time, we can map each time to a specific angle with respect to the fully open position of the finger. We use this angular information to have a metric that can be compared and evaluated against the Polhemus tracker, as explained in chapter 4. The experimental procedure used to validate our methods is presented in chapter 4 and the results are shown and discussed in chapter 5.
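Since the velocity is constant, the mapping itself is linear; the short sketch below makes it explicit, with the example values taken from table 3.2.

    # Linear time-to-angle mapping implied by the constant-velocity assumption.
    def estimated_angle(t_unobstructed_ms, t_full_ms, full_angle_deg):
        """Angle travelled from the fully open position, in degrees."""
        return min(t_unobstructed_ms / float(t_full_ms), 1.0) * full_angle_deg

    # e.g. index finger: full travel of 1020 ms over 75 degrees
    angle = estimated_angle(510, 1020, 75.0)   # -> 37.5 degrees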


Chapter 4

Experimental Evaluation

In this project we test two different ways of improving the performance of the prosthetic hand Robo-Limb: one by mounting a camera and testing the performance of a vision-based system that autonomously chooses and performs the suitable pre-shape for the recognized object, and one by closing the loop in the control of the hand, providing feedback about the fingers' position. For the evaluation of the proposed systems, three different experiments on recognizing, grasping and moving the objects were conducted.

Experiment I

The first experiment is used to quantitatively evaluate the performance of our current listener. The estimated temporal distance travelled by the finger is translated into the angular space, as described in section 3.4.4, and compared with the actual angle each digit has travelled. To measure the absolute, real position of the fingers in space, we used a Polhemus Liberty position tracking device.

Polhemus Liberty Tracking System

The Polhemus Liberty tracking system consists of a system electronics unit, one source and four sensors (figure 4.1), and it calculates the distance and rotation in 3D space of each sensor with respect to the source. The sensors are attached to the different digits, as shown in figure 4.2, and the angle each digit has travelled is calculated with respect to the cube source.


Figure 4.1: The Polhemus Liberty tracking system device with eight sensors and the cube source.

Figure 4.2: The positions of two Polhemus sensors on the hand.

Using the tracking device positioned at point B in figure 4.2, we get real-time (x1, y1, z1) coordinates for the position of the index finger. We use the second sensor, situated at the base of the index finger (point O in figure 4.2), to move the origin of our position calculations from the cube source to this specific point on the hand. Then we project the origin O onto the rotation axis of each finger; for the thumb flexion/extension movement shown in figure 4.2, the rotation axis is at point A. We can then calculate the actual angular distance the finger has covered while moving from point B to point C. The angle between two 3D vectors is calculated from their dot product: for two vectors B = (x_B, y_B, z_B) and C = (x_C, y_C, z_C), the angle between them is given by equation 4.1.


θ = arccos( (B · C) / (|B| |C|) )        (4.1)
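For completeness, equation 4.1 can be computed directly; the sketch below is a straightforward numpy implementation, with clipping added to guard against floating-point values just outside [-1, 1].

    # Direct implementation of equation 4.1.
    import numpy as np

    def angle_between(B, C):
        """Angle in degrees between two 3D vectors."""
        cos_theta = np.dot(B, C) / (np.linalg.norm(B) * np.linalg.norm(C))
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))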

In the full experiment, we used two tracking sensors: one positioned on the thumb, to monitor the flexion/extension motion, and the second positioned on the proximal phalanx of the index finger (point D in figure 4.2). Due to the large size of the Polhemus position sensors, we could not use them on digits that move side by side, such as the index and middle fingers, since the sensors collided with and obstructed the motion of the fingers. For our experiment we commanded the hand to perform a sequence of 450 pre-shapes, randomly chosen from the nine pre-shapes listed in table 3.1, and tracked each finger's movement. After each command is completed, we calculate the difference between the angle computed by the current listener, based on the estimated unobstructed time the finger moved, and the angle measured using the Polhemus. The experiment was repeated with an obstacle obstructing the motion of the index finger, as shown in figure 4.3. The results are discussed in the following section.

Figure 4.3: The set-up of the first experiment with the Polhemus tracking device, including an object that blocks the index finger's path.

Results

This experiment was conducted to calculate the accuracy of the closed-loop current listener over time. Histogram 4.4a presents the frequency of each angle difference, in degrees, over the 450 pre-grasps, for the index finger and the thumb.


(a) Angle differences for the index and thumb

(b) Obstacle-free versus obstructed movement

Figure 4.4: Histogram 4.4a compares the frequency of the angle differences for the index finger and the thumb. Histogram 4.4b compares the angle differences between an obstacle-free movement of the index finger and an obstructed one.

We observe that the largest volume of angular error lies between zero and 4 degrees for both fingers. The thumb appears to be slightly less accurate than the index, based on the observation that it has a greater error rate than the index finger at all angle differences above 1 degree. The angle difference is indicative of the accuracy of our angle estimation. The overall accuracy of each finger is shown in table 4.1.

finger   accuracy (no obstacle)   accuracy (with obstacle)
thumb    80.5%                    -
index    91.8%                    64.5%

Table 4.1: The accuracy for each finger, with and without the presence of obstacles, during the Polhemus experiment.

The thumb has significantly lower accuracy than the index, as expected from the observation that the thumb's position estimation is wrong more often than the index's, as indicated in histogram 4.4a. The large difference between the two fingers in the obstacle-free case could be explained by sporadic failures of the device, resulting in repeated oscillations in the current levels even when the finger was standing still. Although the current listener achieves high accuracy for unobstructed motion, particularly for the index finger, we observe a large drop in the obstructed case. Finally, we compare the average angle difference between the opening motion and the closing motion, and observe that the closing motion tends to err more than the opening one. This trend may be indicative of the difference between the opening and closing times estimated by the current listener (table 3.2).

Figure 4.5: Average angle difference for the opening and closing motions of the thumb and the index finger, with and without an obstacle.

Overall, the angle differences in the unobstructed case correspond to a distance difference of 6–9 mm for the index fingertip and 5–12 mm for the thumb fingertip. When obstacles are present in the path of the fingers, the end-effector estimate can be off by as much as 5 cm from its actual position. These quantitative estimates are also evaluated empirically in the following experimental procedure.
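These distances are consistent with the arc-length relation s = rθ; the quick check below uses an effective finger length of 90 mm, a value assumed purely for illustration.

    # Arc-length check of the reported fingertip offsets.
    import math
    r_mm = 90.0                      # assumed joint-to-fingertip distance
    for err_deg in (4, 6):           # typical angular errors from figure 4.4
        print(err_deg, r_mm * math.radians(err_deg))   # ~6.3 mm and ~9.4 mm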

Experiment II

The second experiment is a pick-and-place experiment in which the nine objects listed in table 3.1 are labelled with QR codes and positioned on a table, as shown in figure 4.6. The task is to reach, pick up and release the target object; the different steps of this process are presented in the finite state machine of figure 4.7. The subject returns the hand to an initial position on the table each time before continuing with the grasping of the next object. The hand starts from a fully open pose at the beginning of the experiment.

Figure 4.6: Experimental set-up for the pick-and-place experiments.

A computer message prompts the subject to grasp an object randomly chosen from the list of nine objects, and the subject is instructed to approach the target object with the camera facing the QR code. When the QR code is recognised, the pre-shape is automatically triggered and a computer alarm notifies the subject that the hand has executed the pre-shape. The next step is to pick up the object. The subject moves the hand towards the object until he considers it close enough to grasp. The final grip is commanded using the keyboard to send the closing trigger. After the hand has closed, the subject is asked to hold the object, move it to a different position on the table and then release it, sending an opening EMG-simulator signal using the keyboard. The subject then waits for the observer to press a button indicating the success or failure of the grasp for this object. Success is defined as completing the task, including moving the object to a new position.

Figure 4.7: The finite state machine of the pick-and-place task.


The experiment for a single object is completed when the hand is returned to the initial position. To evaluate the performance of the vision-based system, we measured 1) the accuracy of the object recognition, 2) the execution time until the QR code is recognised and 3) the success of the task as described above. As expected, the accuracy of the classifier is 99.98%, with the 0.02% error occurring because the experimenter pointed at a different object from the one the program asked him to grasp. This is not of significant importance, since undamaged QR codes are always correctly and uniquely identified. What we were actually interested in was the amount of time between the decision phase, when we decide which object to grasp, and the recognition of the QR code. The mean time for each subject is shown in figure 4.8 and lies between 1.65 and 2.16 seconds.

Figure 4.8: Average time passed between the computer prompt and the actual grasping by each subject.

We would normally compare this time against EMG-based techniques for commanding the pre-grasp, in order to have a comparison with a real-life system, but we did not have access to such information. What we can immediately comment on is that the number of objects the subject can grasp, without thinking about the execution of the pre-grasp but only by moving the hand towards the object, is almost double the number the clinician from Touch Bionics (Goodwin, 2014) suggested as the maximum a real patient can use. Furthermore, there is no reason to believe there is any limit on the number of pre-grasps that can be correlated with QR codes, other than the number of objects one can label with this code size.


Although the success rate here is very good, we do not believe that using QR codes for every object one uses in everyday life is a feasible real-life scenario. The limitations of the vision-based controller are discussed more extensively in section 5.2.

Experiment III

The same experimental procedure is used to evaluate the performance of the current listener in estimating the time each finger has moved unobstructed, but now we focus on the transitions between the different pre-grasps. Our controller uses its internal real-time estimate of the times needed to open and close the fingers to pre-process every CAN command before it is published to the PCAN-BUS. As mentioned before, the commands are defined as the temporal distance from the fully open position. The outcome metrics used to evaluate the closed-loop control of the hand are 1) the correct execution of the pre-shape, 2) the suitability of the pre-shape for grasping the object and 3) the overall execution time of each grasp. Before running the closed-loop controller we ran the full experiment with the two open-loop solutions, in order to establish a baseline for the time needed to transition between the different pre-shapes with the open-loop solutions. Three subjects were asked to perform the following two-phase experiment.

Part I

The first part of this experiment is the 'training phase', where the subject gets acquainted with the procedure and the different pre-grasps. The subject is asked to simply transition between different pre-shapes without performing the actual grasping of the objects. The sequence of objects the subject is asked to grasp is randomly chosen, with the constraint of grasping each object 10 times. The execution of this phase ends when the hand is shaped according to the recognized QR code, in the 'hand pre-shaping' stage of the finite state machine of figure 4.7. This part of the experiment is used to evaluate the obstacle-free estimation of the fingers' position.


Part II

After the training phase, the subject is asked to perform the complete control cycle, including grasping, moving and releasing the object. As in the previous part, the subject is asked to grasp each object 10 times, in random order. Now, after the hand pre-shapes, the subject is asked to perform the whole pick-and-place task and is free to close and open the fingers as many times as he wants. After the object is released, the observer evaluates the performance of the grasp in terms of transition success and stability, and the subject is asked to proceed with the next object.

Results

From these experiments we extract the time it takes to complete the full pre-grasp, from the moment the command is sent to the PCAN-BUS to the moment it has been executed. Histogram 4.9 shows the average time to grasp each object in three cases: 1) the open-loop control with the initial solution of fully opening every digit before executing each command; 2) the open-loop solution of pre-processing the commands, fully opening only the fingers that are going to be moved to an intermediate position, while sending extreme opening or closing commands to the fingers that are supposed to end up open or closed respectively; and 3) the closed-loop case, which commands a movement of the fingers that simply covers the distance between their current position and the desired one. We notice a significant reduction in time between the first and second open-loop solutions, and between the open-loop solutions and the closed-loop one. Figure 4.10 presents the average time needed to perform a full pre-grasp for the three approaches, over all the experiments we held. The same trend as in our previous observations is noted: the most time-consuming approach is the open-loop one that opens the hand every time before executing the actual grasp, and the least time-consuming is the closed-loop solution. Table 4.2 shows the percentage reduction in execution time between the different approaches. The pre-processed solution reduces the overall execution time by more than half with respect to the original solution of fully opening all the fingers before pre-grasping. The closed-loop approach is consistently the best of all, in all the pre-grasps, reducing the execution time by 71.64% with respect to the original solution and by 40% with respect to the pre-processed one.


Figure 4.9: Histogram showing the average time needed to execute a pre-grasp with respect to the object to be grasped, for the three controller approaches: the open-loop solution that first fully opens the hand, the pre-processed open-loop solution, and the closed-loop solution using the current feedback.

Figure 4.10: Comparison of the time needed to perform a full grasp between the three different control approaches.

Regarding the accuracy of the pre-grasp execution using the closed-loop controller, we ran the experiment for 900 consecutive transitions between the different objects, without performing the final grip of the objects, and every pre-shape was performed correctly.


             solution 1   solution 2
solution 2   52.58%       -
solution 3   71.64%       40.2%

Table 4.2: Table showing the percentage of time reduction between the three different controller approaches.

This is not the case when the subjects are free to use opening and closing commands. The suitability of the pre-grasp execution falls to 97.92% over the first 45 transitions and to 93.7% over the next 45. The correct transitions between pre-grasps are even worse, since sometimes a wrong estimate of the position of some fingers does not affect the success of grasping the object, yet the pre-shape is not fully correct as specified in the configuration files. The accuracy of the transitions falls to 94.44% over the first 45 transitions and to 85.7% over the next 45. This drop in accuracy was expected given the Polhemus experiment results, where a large inaccuracy of the current listener in the presence of obstacles was also noticed. Further investigation of obstacle detection based on the current levels is necessary.

Chapter 5

Discussion and Conclusions

This project has been quite successful in both elements of investigation towards improving the performance of Robo-Limb: the vision-based controller and the proprioceptive sensor feedback. This chapter summarizes the contributions of our work, discusses the limitations of our approach and proposes ideas for future work in the investigated areas.

5.1

Contribution

This project serves as a proof of concept for the performance improvements we investigated.

Vision System

Regarding the vision system that recognizes the QR codes and automatically performs the pre-grasp, we provide a full description of the different components of a general system that recognises an object, selects and executes a pre-shape, and manipulates the object according to its purpose. Our system was fully implemented as a ROS package, with the prosthetic hand, the camera, the image recognition algorithm and the execution of the pre-shapes being different nodes of this package. This serves as a base that allows easy substitution of each of these nodes with a different one, for instance an alternative image processing algorithm or even a completely different robotic hand. We did not have the opportunity to perform experiments with real patients; this would be an interesting investigation, since they are the only ones able to comment on the ease of use and the reduction in mental effort needed to control the hand using the proposed system.

Closed-loop controller

With respect to our work on providing proprioceptive feedback to the hand, the results clearly show the superiority of a closed-loop system over an open-loop one, especially in the overall time of the grasp. Our work agrees with previous work emphasising the advantage of closed-loop control in modern prostheses. We identified the current patterns for the different motions of the digits of the prosthetic hand and implemented a system that estimates the distance travelled by the hand in both time and angle space. The controller has performance similar to the open-loop controller in the transitions between pre-grasps, but the performance degrades significantly when there are obstacles in a finger's path. The controller area network (CAN) is used in other prosthetic hands besides Robo-Limb, including the University of New Brunswick hand (Losier et al., 2011) and the ELU2 Hand (2014). We established a procedure for calibrating and processing the feedback obtained through this connection, in order to extract temporal information about the prosthetic hand's motion.

5.2

Limitations

Vision system

As stated from the beginning, the vision system based on QR code recognition is purely a proof of concept that a system integrating a camera can improve performance in specific areas where surface electromyography techniques fail. Thus, the vision controller faces significant limitations.

• Firstly, labelling all the objects one uses in everyday life is neither practical nor intuitive. The patient is limited to using the vision functionality of the prosthesis only inside his residence, or in controlled environments where he has access and permission to label objects.

• The size of the QR codes is also a limiting parameter. For objects like spoons and forks, the QR codes can only be positioned on the handle, which is very narrow, so the QR code must be shrunk significantly to fit. We have noticed, though, that the distance at which a QR code can be recognised is proportional to its size: the smaller the code, the closer the patient has to move the hand to it, and the longer he has to hold it there, a behaviour that deviates from the intuitive feeling we try to achieve.

• The size of the QR codes directly relates to the amount of information that can be effectively encoded. As the number of objects to be labelled rises, bigger QR codes are required, but this in turn prevents the use of QR codes on smaller objects, such as the fork.

• As with every application involving a camera, our system is highly dependent on the lighting conditions of the environment. The QR codes cope well with slight variations in light and achieve fast, orientation-invariant recognition, but if the QR algorithm is to be substituted by a raw-data recognition algorithm, then parameters such as the lighting conditions of the environment, the orientation of the object and possible occlusions by objects blocking the camera view need to be considered.

• Finally, the one-to-one pre-grasp to object mapping does not allow different tasks to be performed with the same object. For example, if the patient wants to grasp a cup he can only use the cylindrical grasp, even if he just wanted to move the cup by its handle.

Closed-loop controller

Several assumptions have been made in the proprioceptive feedback approach.

• Firstly, we assume that the velocity at which the fingers move is constant throughout all our experiments. In a real-life scenario this is not the case, since the patient uses EMG signals for opening and closing the fingers, and these are not constant but proportional to the muscle power.

• The controller monitors the magnitude of the current to estimate the time left for opening/closing the digit, and thus depends on the battery condition. All the experiments were performed with a fully charged battery.

56

Chapter 5. Discussion and Conclusions

• The ability of the current listener to correctly estimate the distance travelled by the fingers is also affected by the material density of the object. Specifically, when the hand comes in contact with soft objects, or even a human hand, the current levels are the same as for an obstacle, but the finger can still move slightly, since the material does not fully stop its motion. The estimate of the time it moved unobstructed is then far from the real value.

5.3

Future Work

Vision System

Regarding the vision system, numerous object recognition algorithms exist that could substitute the QR recognition and classify the objects as they are captured by the camera. With the QR codes, the patient cannot grasp an object if it is not labelled, even if it is identical to one that he has labelled and used before. A classification algorithm, as long as it is trained on a specific category of objects, should allow the patient to grasp different objects of the same class. Moreover, sophisticated image processing can offer information about the size and shape of the object, which can be used to adjust the hand accordingly. Even if the object is of an unknown class on which the algorithm has not been trained, a grasp can be attempted using information about object properties, as also suggested by Saxena et al. (2008). One very interesting extension would be to implement a Bag-of-Words method, as described by Csurka et al. (2004), using SIFT features, in order to ensure orientation- and scale-invariant recognition. The method involves extracting features of the classes from a training set and creating a vocabulary of features by clustering the keypoint descriptors. A Bag-of-Words descriptor is then a histogram over this vocabulary, computed for each image. The BOW descriptors of the training images can be used to train an SVM classifier. This is also the direction of a project by a colleague, Hoppe (2014).
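A minimal sketch of this pipeline, using OpenCV's built-in BOW helpers, is given below; the training_images list is an assumed placeholder, and SIFT availability depends on the OpenCV build.

    # Hedged sketch of the Bag-of-Words pipeline described above.
    import cv2

    sift = cv2.SIFT_create()
    bow_trainer = cv2.BOWKMeansTrainer(100)            # vocabulary of 100 visual words

    for path in training_images:                       # assumed list of file paths
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            bow_trainer.add(desc)

    vocabulary = bow_trainer.cluster()                 # k-means over all descriptors

    extractor = cv2.BOWImgDescriptorExtractor(sift, cv2.BFMatcher(cv2.NORM_L2))
    extractor.setVocabulary(vocabulary)

    def bow_descriptor(img):
        """Histogram over the vocabulary; input for an SVM classifier."""
        return extractor.compute(img, sift.detect(img, None))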

Closed-loop Control

The current listener, as it stands, has many limitations when it comes to dealing with collisions with objects. Machine learning solutions for pattern recognition in signal activity have been implemented by Bousaleh et al. (2012), in order to detect and classify signal disturbances in an electric power signal, while Khezri and Jahed (2007) have trained neural networks to recognize hand movements from the EMG signals generated by the muscles. One interesting extension would be to also try to classify the material density properties of objects.

5.4

Conclusion

Overall, regarding the vision-based approach to pre-grasping, we can conclude that the proposed semi-autonomous system can improve the patient's life by providing access to the wider range of functionalities that prosthetic hands already offer but that patients currently cannot fully exploit. More sophisticated object recognition algorithms should be evaluated with the current system. As for the closed-loop controller, we showed that proprioceptive feedback reduces the execution time to less than half of that of the initial open-loop solution most companies employ. The reduced accuracy of the current listener controller should not be seen as a drawback, but rather as a motivation for integrating position encoders in the design of current prosthetic devices.

Appendix A

Available pre-grasps by Touch Bionics (2014)



3.3.1 Features

Click on the features icon to enter the features. The features are the hand positions and grips that can be programmed onto your i-limb ultra revolution, and triggers are the muscle signals you give in order to enter the feature.

Precision Pinch Grip Options

Precision pinch grip options are best for picking up small items between the thumb and index finger. There are 4 options available, depending on how you want the other digits to perform while doing the pinch. The most popular is Thumb Precision Pinch Closed.

Feature: Standard Precision Pinch Opened
Description: Middle, ring and little fingers remain fully opened and switch off. Both index finger and thumb will move to provide grip.
Use: Allows for a wider opening than thumb precision. Aids with visualization or for pinching objects where the non-active digits may get in the way.
Task Examples: 1. Returning cards or money to wallet 2. Picking up napkins 3. Folding laundry

Feature: Thumb Precision Pinch Opened
Description: Middle, ring and little fingers remain fully opened and switch off. Thumb automatically moves to a partially closed position. Only index finger will move to provide grip against the fixed thumb.
Use: Accuracy is improved when picking up an object by allowing you to place the thumb against the object to be pinched. Only the index finger moves to grasp the object. Ideal for repetitive tasks.
Task Examples: 1. Pick up pencil or slim, long objects 2. Thread needle 3. Sort/Pick up medications

Feature: Standard Precision Pinch Closed
Description: Middle, ring and little fingers automatically close and switch off. Both index finger and thumb will move to provide grip.
Use: Will allow for better visualization in some tasks, especially when the working surface is not at eye level.
Task Examples: 1. Slide small object from shelf over head 2. Pick up small object from floor

Feature: Thumb Precision Pinch Closed
Description: Middle, ring and little fingers automatically close and switch off. Thumb automatically moves to a partially closed position. Only index finger will move to provide grip against the fixed thumb.
Use: Can improve accuracy for picking an object by allowing you to place the thumb against the object to be pinched, and only the index finger moves to grasp the object. Ideal for repetitive tasks.
Task Examples: 1. Pick up and open sugar packet from a coffee stand 2. Pick up coins 3. Alternative way to tie shoes (also see "lateral grip")



Bibliography

Adriano de Oliveira Andrade, A. B. S. (2001). EMG pattern recognition for prosthesis control. Proceedings of the 16th Brazilian Congress of Mechanical Engineering (COBEM 2001), Bioengineering.

Baxter (2014). http://www.rethinkrobotics.com/products/baxter-research-robot/.

BeBionic (2014). http://bebionic.com/the_hand.

Belter, J. T., Segil, J. L., Dollar, A. M., and Weir, R. F. (2013). Mechanical design and performance specifications of anthropomorphic prosthetic hands: a review. Journal of Rehabilitation Research and Development, 50(5):599–618.

Biddiss, E. and Chau, T. (2007). Upper-limb prosthetics: critical factors in device abandonment. American Journal of Physical Medicine & Rehabilitation, 86(12):977–87.

Bionics, T. (2014). http://www.touchbionics.com/.

Bitzer, S. and van der Smagt, P. (2006). Learning EMG control of a robotic hand: towards active prostheses. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, pages 2819–2823.

Bosch (2014). http://www.kvaser.com/software/7330130980914/V1/can2spec.pdf.

Bousaleh, G., Darwiche, M., and Hassoun, F. (2012). Pattern recognition techniques applied to electric power signal processing. In Sciences of Electronics, Technologies of Information and Telecommunications (SETIT), 2012 6th International Conference on, pages 809–813.

Brochier, T., Spinks, R. L., Umiltà, M. A., and Lemon, R. N. (2004). Patterns of muscle activity underlying object-specific grasp by the macaque monkey. Journal of Neurophysiology.

Carmena, J. M. (2013). Advances in neuroprosthetic learning and control. PLoS Biol, 11(5):e1001561.

Castellini, C. and van der Smagt, P. (2009). Surface EMG in advanced hand prosthetics. Biological Cybernetics, 100(1):35–47.

Csurka, G., Dance, C. R., Fan, L., Willamowski, J., and Bray, C. (2004). Visual categorization with bags of keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV, pages 1–22.

Cutkosky, M. (1989). On grasp choice, grasp models, and the design of hands for manufacturing tasks. Robotics and Automation, IEEE Transactions on, 5(3):269–279.

Cyberhand (2014). http://www-arts.sssup.it/Cyberhand/introduction/.

Dosen, S., Cipriani, C., Kostić, M., Controzzi, M., Carrozza, M. C., and Popović, D. B. (2010). Cognitive vision system for control of dexterous prosthetic hands: experimental evaluation. Journal of NeuroEngineering and Rehabilitation, 7:42.

Feix, T., Schmiedmayer, H.-B., Romero, J., and Kragic, D. (2009). A comprehensive grasp taxonomy. In Robotics, Science and Systems Conference: Workshop on Understanding the Human Hand for Advancing Robotic Manipulation.

Fligge, N., Urbanek, H., and van der Smagt, P. (2013). Relation between object properties and EMG during reaching to grasp. Journal of Electromyography and Kinesiology, 23(2):402–410.

Goodwin, A. (2014). Private communication. Prosthetist, Clinic Manager at Touch Bionics.

Gscam (2014). http://wiki.ros.org/gscam.

Hand, ELU2 (2014). http://www.elumotion.com/Elu2-hand.htm.

Hoppe, S. (2014). Private communication. Colleague.

Jeannerod, M. (1984). The timing of natural prehension movements. Journal of Motor Behavior, 16:235–254.

Khezri, M. and Jahed, M. (2007). Real-time intelligent pattern recognition algorithm for surface EMG signals. BioMedical Engineering OnLine, 6(1):45.

Kuiken, T., Lowery, M., and Stoykov, N. (2003). The effect of subcutaneous fat on myoelectric signal amplitude and cross-talk. Prosthetics and Orthotics International, 27(1):48–54.

Kumar, D. K. (2013). Robotics to the rescue: Prosthetic hands help amputees in developing countries.

Losier, Y., Clawson, A., Wilson, A., Scheme, E., Englehart, K., Kyberd, P., and Hudgins, B. (2011). An overview of the UNB hand system.

Michelangelo, O. (2014). http://www.living-with-michelangelo.com/gb/home/.

Napier, J. R. (1956). The prehensile movements of the human hand.

OpenCV (2014). http://opencv.org/.

Quigley, M., Conley, K., Gerkey, B. P., Faust, J., Foote, T., Leibs, J., Wheeler, R., and Ng, A. Y. (2009). ROS: an open-source robot operating system. In ICRA Workshop on Open Source Software.

Reed, I. S. and Solomon, G. (1960). Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics, 8(2):300–304.

ROS (2014). http://www.ros.org/.

Rosenbaum, D. A. (2009). Human Motor Control. Academic Press.

Sahbani, A., El-Khoury, S., and Bidaud, P. (2012). An overview of 3D object grasp synthesis algorithms. Robotics and Autonomous Systems, 60(3):326–336. Autonomous Grasping.

Santello, M. and Soechting, J. F. (1998). Gradual molding of the hand to object contours. Journal of Neurophysiology.

Saxena, A., Driemeyer, J., and Ng, A. Y. (2008). Robotic grasping of novel objects using vision. The International Journal of Robotics Research, 27(2):157–173.

Schettino, L. F., Adamovich, S. V., and Poizner, H. (2003). Effects of object shape and visual feedback on hand configuration during grasping. Experimental Brain Research.

Schlesinger, G. (1919). Der mechanische Aufbau der künstlichen Glieder. In Borchardt, M., Hartmann, K., Leymann, Radike, R., Schlesinger, and Schwiening, editors, Ersatzglieder und Arbeitshilfen, pages 321–661. Springer Berlin Heidelberg.

Schweitzer, W. (2009). Technical Below Elbow Amputee Issues - Tech bits III prosthetic hands.

Selpho, W. (1857). US Patent 18,021.

Soares, A., Andrade, A., Lamounier, E., and Carrijo, R. (2003). The development of a virtual myoelectric prosthesis controlled by an EMG pattern recognition system based on neural networks. J. Intell. Inf. Syst., 21(2):127–141.

Soechting, J. F. and Flanders, M. Flexibility and repeatability of finger movements. In Hand and Brain: The Neurophysiology and Psychology of Hand Movements.

Supuk, T., Bajd, T., and Kurillo, G. (2011). Assessment of reach-to-grasp trajectories toward stationary objects. Clinical Biomechanics, 26(8):811–818.

Taylor, C. L. and Schwarz, R. J. (1955). The anatomy and mechanics of the human hand. Artificial Limbs, 2(2):22–35.

Thurston, A. J. (2007). Paré and prosthetics: the early history of artificial limbs. ANZ Journal of Surgery, 77(12):1114–1119.

Vincent (2014). http://vincentsystems.de/en/.

Wing, A. M., Turton, A., and Fraser, C. (1986). Grasp size and accuracy of approach in reaching. Journal of Motor Behavior, 18(3):245–260. PMID: 15138146.

Zbar (2014). http://zbar.sourceforge.net/.

Zheng, J., De La Rosa, S., and Dollar, A. (2011). An investigation of grasp type and frequency in daily household and machine shop tasks. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages 4169–4175.
