OBJECT TRANSPORTATION AND ORIENTATION IN VIRTUAL ENVIRONMENTS

Yanqing Wang M.A.Sc., Technical University of Nova Scotia, 1994

B.Eng., Dalian University of Technology, 1982

THESIS SUBMITTED IN PARTIAL FULFILMENT OF

THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

in the School of Kinesiology

© Yanqing Wang 1999

SIMON FRASER UNIVERSITY

December, 1999

All rights reserved. This work may not be reproduced in whole or in part, by photocopy or other means, without permission of the author.

National Library of Canada
Acquisitions and Bibliographic Services

Bibliothèque nationale du Canada
Acquisitions et services bibliographiques

The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.

L'auteur a accordé une licence non exclusive permettant à la Bibliothèque nationale du Canada de reproduire, prêter, distribuer ou vendre des copies de cette thèse sous la forme de microfiche/film, de reproduction sur papier ou sur format électronique.

The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.

L'auteur conserve la propriété du droit d'auteur qui protège cette thèse. Ni la thèse ni des extraits substantiels de celle-ci ne doivent être imprimés ou autrement reproduits sans son autorisation.

Abstract

Four experiments were conducted to investigate human performance on object manipulation in virtual environments. Experiment 1 established the structure of the object transportation and orientation processes in terms of concurrence and interdependence. Experiment 2 examined the size effects of controller, cursor and target. Experiment 3 evaluated the effect of orientation disparity between the cursor and the controller. Experiment 4 identified the roles of contextual haptic and visual constraints on object manipulation. A novel structural model was developed for this research on object transportation and orientation.

It was found that the object transportation and orientation processes had a parallel, interdependent structure. The object transportation process contained the object orientation process and therefore was the critical path for the task completion time. This structure was persistent over all experimental conditions for all four experiments. Object manipulation performance was the result of the interplay between haptic and visual information presented in virtual environments. Subjects achieved better performance when haptic and visual information about objects were consistent. Contextual haptic and visual constraints had significant effects on object manipulation.

The findings are discussed in relation to implications for human-computer interaction design.


Acknowledgments

I want to thank many people who have contributed in a variety of ways to this thesis.

I wish to express my gratitude to Dr. Christine MacKenzie, my senior supervisor, for introducing me to the wonderful world of the grasping hand. Her insight into human motor systems and human-computer interaction guided me through my studies. I thank my supervisory and examining committee members, Dr. Kellogg Booth, Dr. Tom Calvert and Dr. Dan Weeks, for their guidance and encouragement. I also want to thank Dr. Kori Inkpen and Dr. Stuart Card for serving on my examining committee.

I am grateful to my fellow graduate students and lab members for their help and valuable feedback at various stages of my studies: Dr. Ron Marteniuk, Dr. Evan Graham, Valerie Summers, Caroline Cao, Chris Ivens, Gayle Heinrich, Jennifer Ibbotson, Chris Bertram, Regan Mandryk and Andrea Mason. I also want to thank the volunteers who participated in my experiments.

I am indebted to my parents for their support and sacrifices while I studied abroad. I thank my sister for her support.

Finally, I thank my wife, Grace, for her love, patience and understanding during the course of my graduate studies.

Table of Contents

Approval
Abstract
Acknowledgments
List of Figures
List of Tables

Chapter 1 Introduction
  1.1 Background
    1.1.1 Human prehension
      1.1.1.1 Phases of prehension
      1.1.1.2 Reaching and grasping
      1.1.1.3 Object manipulation
      1.1.1.4 The role of visual information
      1.1.1.5 The role of haptic information
    1.1.2 Object manipulation in human-computer interaction (HCI)
      1.1.2.1 Object manipulation and direct manipulation
      1.1.2.2 Object manipulation in virtual environments
  1.2 Motivations and rationale for the current research
    1.2.1 Perceptual structure of interactive space
    1.2.2 The structure of object transportation and orientation
    1.2.3 Effects of haptic and visual information
    1.2.4 Implications for HCI design
  1.3 Research objectives

Chapter 2 Model and Methodology
  2.1 The structure of object transportation and orientation
    2.1.1 Terminology
    2.1.2 Structural model
  2.2 Methodology for the experiments
    2.2.1 Experimental apparatus
    2.2.2 Experimental procedure
    2.2.3 Data analysis
  2.3 Overview of experiments

Chapter 3 Experiment 1: The structure of object transportation and orientation
  3.1 Introduction
  3.2 Method
    3.2.1 Subjects
    3.2.2 Experimental setup
    3.2.3 Procedure
    3.2.4 Data analysis
  3.3 Results
    3.3.1 Temporal measures
      3.3.1.1 Concurrence
        Concurrence with vision of the hand and object
        Concurrence with no vision of the hand and object
        Effects of vision conditions
      3.3.1.2 Interdependence
        Interdependence with vision of the hand and object
        Interdependence with no vision of the hand and object
    3.3.2 Spatial errors
      3.3.2.1 Constant errors
        Constant errors of distance (CED)
        Constant errors of angle (CEA)
        Effects of vision conditions
      3.3.2.2 Variable errors
        Variable errors of distance (VED)
        Variable errors of angle (VEA)
        Effects of vision conditions
  3.4 Discussion
    3.4.1 The structure of object transportation and orientation
        Concurrence
        Interdependence
    3.4.2 Effects of visual feedback information
    3.4.3 Implications for HCI design
  3.5 Conclusions

Chapter 4 Experiment 2: The effect of controller, cursor and target sizes
  4.1 Introduction
  4.2 Method
    4.2.1 Subjects
    4.2.2 Experimental setup
    4.2.3 Procedure
    4.2.4 Data analysis
  4.3 Results
    4.3.1 Temporal measures
      4.3.1.1 Relative time courses
      4.3.1.2 Completion time (CT) and transportation time (TT)
      4.3.1.3 Orientation time (OT)
    4.3.2 Spatial errors
      4.3.2.1 Constant errors of distance (CED) and constant errors of angle (CEA)
      4.3.2.2 Variable errors of distance (VED)
      4.3.2.3 Variable errors of angle (VEA)
  4.4 Discussion
    4.4.1 Relative size hypothesis
    4.4.2 Same size hypothesis
    4.4.3 The structure of object transportation and orientation
    4.4.4 Interplay of haptic and visual information
    4.4.5 Implications for HCI design
  4.5 Conclusions

Chapter 5 Experiment 3: The effect of spatial orientation disparity between haptic and visual displays
  5.1 Introduction
  5.2 Method
    5.2.1 Subjects
    5.2.2 Experimental setup
    5.2.3 Procedure
    5.2.4 Data analysis
  5.3 Results
    5.3.1 Temporal measures
      5.3.1.1 Relative time courses
      5.3.1.2 Task completion time (CT) and transportation time (TT)
      5.3.1.3 Orientation time (OT)
    5.3.2 Spatial errors
      5.3.2.1 Constant errors of distance (CED) and constant errors of angle (CEA)
      5.3.2.2 Variable errors of distance (VED)
      5.3.2.3 Variable errors of angle (VEA)
  5.4 Discussion
    5.4.1 Optimum human performance with no disparity
    5.4.2 Effects of disparity on orientation only
    5.4.3 Roles of haptic and visual information
    5.4.4 The structure of object transportation and orientation
    5.4.5 Implications for HCI design
  5.5 Conclusions

Chapter 6 Experiment 4: The role of contextual haptic and visual constraints
  6.1 Introduction
  6.2 Method
    6.2.1 Subjects
    6.2.2 Experimental setup
    6.2.3 Procedure
    6.2.4 Data analysis
  6.3 Results
    6.3.1 Temporal measures
      6.3.1.1 Relative time courses
      6.3.1.2 Task completion time (CT)
      6.3.1.3 Transportation time (TT)
      6.3.1.4 Orientation time (OT)
    6.3.2 Spatial errors
      6.3.2.1 Constant errors of distance (CED) and constant errors of angle (CEA)
      6.3.2.2 Variable errors of distance (VED)
      6.3.2.3 Variable errors of angle (VEA)
  6.4 Discussion
    6.4.1 The role of contextual haptic constraints
    6.4.2 The role of contextual visual constraints
    6.4.3 The structure of object transportation and orientation
    6.4.4 Implications for HCI design
  6.5 Conclusions

Chapter 7 Summary and Discussion
  7.1 Review of four experiments
    7.1.1 Experiment 1
    7.1.2 Experiment 2
    7.1.3 Experiment 3
    7.1.4 Experiment 4
  7.2 Discussion
    7.2.1 The structure of object transportation and orientation
        Concurrence
        Interdependence
        Structural model
    7.2.2 Effects of haptic and visual information
        Interplay
        The role of visual information
        The role of haptic information
    7.2.3 Implications for HCI design

Chapter 8 Conclusions, Limitations and Future Research
  Conclusions
  Limitations
  Future research

References

List of Figures

Chapter 2. Model and Methodology
  Figure 2.1. Concurrence of time courses of object transportation and orientation processes
  Figure 2.2. Interdependence of time courses of object transportation and orientation processes
  Figure 2.3. The Virtual Hand Laboratory setup

Chapter 3. Experiment 1: The structure of object transportation and orientation
  Figure 3.1. The Virtual Hand Laboratory setup for Experiment 1
  Figure 3.2. Time courses of object transportation and orientation processes with vision of the hand and wooden cube
  Figure 3.3. Time courses of object transportation and orientation processes with no vision of the hand and wooden cube
  Figure 3.4. Task completion time (CT), transportation time (TT) and orientation time (OT) with visual feedback conditions
  Figure 3.5. Constant errors of distance (CED) and constant errors of angle (CEA) with visual feedback conditions
  Figure 3.6. Variable errors of distance (VED) and variable errors of angle (VEA) with visual feedback conditions

Chapter 4. Experiment 2: The effect of controller, cursor and target sizes
  Figure 4.1. The Virtual Hand Laboratory setup for Experiment 2
  Figure 4.2. Time courses at target distances 100 mm and 200 mm
  Figure 4.3. Time courses with small and large controller sizes
  Figure 4.4. Interaction between controller size and cursor size on transportation time
  Figure 4.5. Interaction between cursor size and target size on transportation time
  Figure 4.6. Interaction between controller size and cursor size on orientation time
  Figure 4.7A. Interaction between cursor size and target size on orientation time at target distance 100 mm
  Figure 4.7B. Interaction between cursor size and target size on orientation time at target distance 200 mm
  Figure 4.8A. Interaction between cursor size and controller size on variable errors of distance at 100 mm
  Figure 4.8B. Interaction between cursor size and controller size on variable errors of distance at 200 mm
  Figure 4.9A. Interaction between cursor size and target size with small controller on variable errors of distance
  Figure 4.9B. Interaction between cursor size and target size with large controller on variable errors of distance
  Figure 4.10. Interaction between cursor size and target size on variable errors of angle
  Figure 4.11. Interplay of controller, cursor and target size

Chapter 5. Experiment 3: The effect of spatial orientation disparity between haptic and visual displays
  Figure 5.1. The Virtual Hand Laboratory setup for Experiment 3
  Figure 5.2A. Time courses at target distance 100 mm and 200 mm
  Figure 5.2B. Time courses for physical object to graphic target match with disparity (30 deg.) and without disparity (0 deg.)
  Figure 5.2C. Time courses for graphic object to graphic target match with disparity (30 deg.) and without disparity (0 deg.)
  Figure 5.3. Task completion time at target distance 100 mm and 200 mm
  Figure 5.4. Transportation time at target distance 100 mm and 200 mm
  Figure 5.5A. Orientation time at target distance 100 mm and 200 mm
  Figure 5.5B. Orientation time with task conditions and disparity conditions
  Figure 5.6. Variable errors of distance with task conditions
  Figure 5.7. Variable errors of angle with task conditions and disparity conditions

Chapter 6. Experiment 4: The role of contextual haptic and visual constraints
  Figure 6.1. The Virtual Hand Laboratory setup for Experiment 4
  Figure 6.2A. Time courses with haptic constraints
  Figure 6.2B. Time courses with visual constraints
  Figure 6.2C. Time courses with target distances
  Figure 6.2D. Time courses with target angles
  Figure 6.3. Task completion time with haptic constraints and target distances
  Figure 6.4. Task completion time with target distances and angles
  Figure 6.5A. Task completion time with visual constraints and target angles in the table-sliding condition
  Figure 6.5B. Task completion time with visual constraints and target angles in the table-lifting condition
  Figure 6.5C. Task completion time with visual constraints and target angles in the no-table condition
  Figure 6.6. Transportation time with haptic constraints and target distances
  Figure 6.7. Transportation time with target distances and angles
  Figure 6.8. Transportation time with visual constraints and target angles
  Figure 6.9. Orientation time with haptic constraints
  Figure 6.10. Orientation time with visual constraints and target angles
  Figure 6.11. Orientation time with target distances and angles
  Figure 6.12. Variable errors of distance with haptic constraints
  Figure 6.13. Variable errors of distance with visual constraints
  Figure 6.14. Variable errors of distance with target angles
  Figure 6.15. Variable errors of distance with haptic constraints and target distances
  Figure 6.16A. Variable errors of distance with visual constraints and target distances in the table-sliding condition
  Figure 6.16B. Variable errors of distance with visual constraints and target distances in the table-lifting condition
  Figure 6.16C. Variable errors of distance with visual constraints and target distances in the no-table condition
  Figure 6.17. Variable errors of angle with haptic constraints
  Figure 6.18. Variable errors of angle with visual constraints
  Figure 6.19. Variable errors of angle with target distances
  Figure 6.20A. Variable errors of angle with visual constraints and target distances in the table-sliding condition
  Figure 6.20B. Variable errors of angle with visual constraints and target distances in the table-lifting condition
  Figure 6.20C. Variable errors of angle with visual constraints and target distances in the no-table condition

List of Tables

Chapter 7. Summary and Discussion
  Table 7.1. Experimental conditions for Experiment 1
  Table 7.2. Experimental conditions for Experiment 2
  Table 7.3. Experimental conditions for Experiment 3
  Table 7.4. Experimental conditions for Experiment 4

Chapter 1

Introduction

Virtual environments afford a new paradigm for human-computer interaction (HCI), and provide a new testbed to evaluate human performance. This research deals with object manipulation, one of the basic forms of human-computer interaction in today's computer systems. Motivations for this research are twofold: to understand the mechanisms underlying human object manipulation in virtual environments, and to provide implications for human-computer interaction design.

1.1 Background

There are two streams of research on human object manipulation in the literature: object manipulation in the real world, and object manipulation in the context of using computers. In the human motor control community, studies on object manipulation in the real world focus on human prehension. In the HCI community, research on object manipulation is related to direct manipulation interfaces and virtual environments.

1.1.1 Human prehension

1.1.1.1 Phases of prehension

Object manipulation is the part of human prehension where motion is imparted to an object. Motion can be imparted dynamically through changing contacts, or a stably grasped object can be transported and oriented with constant contacts. Human prehension is defined as the application of functionally effective forces to an object for a task, given numerous constraints (MacKenzie and Iberall, 1994). This definition emphasizes the task-specific and functional aspects of the problem. The functional demands on prehension include applying forces, imparting motion and gathering sensory information to achieve a goal.

Human prehension has a hierarchical structure, involving activation in parallel and coordinated control of several components or subsystems. Prehensile behavior can be constrained at the sensorimotor level, the physical level and higher levels. On the other hand, human prehension unfolds in serial order, where unique events or actions occur in different phases. MacKenzie and Iberall (1994) have defined the phases of human prehension in terms of opposition space. An opposition space is determined by the status of the opposition and virtual fingers. The opposition describes the basic directions along which the human hand can apply forces: pad opposition, palm opposition and side opposition (Arbib, Iberall and Lyons, 1985). MacKenzie and Iberall view human prehension as a process of planning, setting up, using and releasing an opposition space, in a palm-centered coordinate system. During the phase of planning an opposition space, humans perceive intrinsic and extrinsic object properties, select a grasping strategy, and plan a hand location and palm orientation, based on motivations and task context. In setting up an opposition space, humans preshape the fingers into a proper posture, and transport or orient the palm to the object. In the phase of using an opposition space, the hand contacts the object, capturing it and maintaining a stable grasp. During the final phase, the gripping forces decrease and the hand releases the opposition space by letting go.

1.1.1.2 Reaching and grasping

A large body of research on human prehension has focused on the phases of reaching and

grasping before contact with an object. There are separate transport and grasp (hand shaping) components when reaching to grasp an object (Jeannerod, 1981, 1984, 1986).

The transport component carries the hand to the target location, while the hand shaping forms a proper opposition space for the upcoming grasp. Jeannerod (1981, 1984, 1988) suggested that there are two independent visuomotor channels for separately controlling the two components. He indicates that these two components appear to be controlled by different brain areas. Transport mainly involves proximal muscles, while hand shaping mainly involves distal segments. The hypothesis of "two independent visuomotor channels" for limb transport and hand shaping has received considerable electrophysiological, neuropsychological and psychophysical support (Palastanga, Field and Soames, 1994; Paulignan and Jeannerod, 1996). Furthermore, Jeannerod (1981, 1984, 1988) observed that the proximal control for the transport component is based more on perceptual information about extrinsic object properties, like spatial location. In contrast, the distal control for the grasp component is based more on information about the intrinsic properties of the object, such as size and shape. Evidence shows that changes in target size primarily affect the hand shaping

component, i.e., the peak aperture between the thumb and index finger, not the transport component (Wallace and Weeks, 1988). However, Marteniuk, MacKenzie and Leavitt

(1990) reported an invariant time to peak deceleration, but a relative lengthening of the time after peak deceleration to object contact for the smallest object, consistent with the effect of target size on pointing. Contrary to independent channels for intrinsic and extrinsic properties, Jakobson and Goodale (1991) also reported that target size and target

distance affected kinematic landmarks for both the transport and grasping components. Another observation is that the peak aperture increases with the peak transport speed of the hand (Wing, Turton and Fraser, 1986). Jeannerod (1981, 1984) reported a temporal coupling as revealed by correlations between the time of peak deceleration of the wrist and the time of peak aperture of the

grip. It appears there is a communication link between the visuomotor channels controlling grip formation and transport of the hand (Sivak and MacKenzie, 1992). Jeannerod (1984) suggests that there is a central program or pattern for the coordination of the transport component with the grasp component. Arbib (1985) assumes that there is

concurrent activation of two motor schemas for reaching to grasp. One schema moves the arm to transport the hand towards the target, and the other preshapes the hand, with the finger separation and orientation guided by the output of the appropriate perceptual schemas.

1.1.1.3 Object manipulation

There is a relative lack of research on object manipulation, compared to the kinematic studies of setting up an opposition space before an object has been acquired (MacKenzie

and Iberall, 1994). Fitts' tasks can be considered a special case of object manipulation where a pointer or stylus is used as the object being manipulated (Fitts, 1954). Fitts' law states that movement time increases with increases in target distance and with decreases in target size. There is a tradeoff between movement speed and accuracy. Fitts' law has also been established for pointing tasks without an object in hand (Graham and MacKenzie, 1996).
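As a rough illustration of this speed-accuracy tradeoff, Fitts' law in its Shannon formulation (MacKenzie, 1992) can be sketched as follows; the coefficients a and b are arbitrary placeholder values, not fitted data from any of the studies cited here.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) from Fitts' law, Shannon form:
    MT = a + b * log2(D/W + 1), where a and b are device-dependent
    constants (illustrative values here, not fitted data)."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A farther, smaller target takes longer than a near, wide one.
print(fitts_movement_time(distance=160, width=10))
print(fitts_movement_time(distance=40, width=20))
```

Doubling target distance or halving target width raises the index of difficulty by about one bit, so predicted movement time grows by the slope b.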

Human motor control research traditionally has not distinguished between human performance with or without an object in hand. Recently, researchers have attempted to extend the "two independent visuomotor channels" hypothesis of Jeannerod (1981, 1984, 1988) to object transportation and orientation, but in general, the results are not supportive

(Desmurget, Prablanc, Arzi, Rossetti and Paulignan, 1996). Research by Desmurget et al. (1996) shows that arm transportation and hand orientation are neither implemented nor

controlled independently. Their results reveal an integrated control of hand transportation and orientation during prehension movements. Soechting and Flanders (1993) conducted a series of cylinder-matching experiments, measuring the end-effector errors in location and orientation matching. They suggested that there are parallel, interdependent channels

for location and orientation in sensorimotor transformations for reaching and grasping.

1.1.1.4 The role of visual information

Lacking visual feedback about hand movements generally decreases accuracy and causes grasping movements to be more conservative (Wing et al., 1986). In the deafferented patient, vision is especially helpful for fine adjustment in grasping (Rothwell, Traub,

Day, Obeso, Thomas and Marsden, 1982). Without visual information, the affected hand cannot be preshaped properly, and the transport component is also affected (Jeannerod, 1986). It appears that with interruption of somatosensory pathways, subjects need visual

information in order to configure the hand. In the notion of two phases of pointing movement, the second phase, the "home in" phase, may be due to visual feedback correction (Woodworth, 1899; Welford, 1976; Crossman and Goodeve, 1983; Schmidt, 1988). However, evidence shows that the second phase exists even without vision, possibly due to a centrally generated part of the prehension pattern (Jeannerod, 1984).

When visual information is available, it will be used (Keele and Posner, 1968). Woodworth (1899) observed that visual feedback is more important for a slower movement than for a fast movement. For human prehension, Wing et al. (1986) suggest that when critical information such as vision of the aperture is provided, other sensory information may not be necessary for fine adjustment. Posner and colleagues demonstrated evidence of visual dominance (Posner, Nissen and Klein, 1976).

They found that, when conflicting visual and haptic information was presented, the subjects' behavior relied on visual information, e.g., on what they saw rather than what they felt.

1.1.1.5 The role of haptic information

All sensory and manipulative tasks performed actively with the normal hand involve tactile and kinesthetic information. Tactile information, referring to the sense of contact with the object, is mediated by the responses of low-threshold mechanoreceptors innervating the skin within and around the contact region (Srinivasan, 1994; Kaczmarek and Bach-y-Rita, 1995). Kinesthetic information refers to the sense of position and motion derived from sensory receptors in the skin around the joints, joint capsules, tendons and muscles, together with neural signals derived from motor commands (Loomis and Lederman, 1986). Only tactile information is conveyed when objects contact a passive, stationary hand, except for the ever-present kinesthetic information about limb posture. Only kinesthetic information is conveyed during active, free motion of the hand, although the absence of tactile information by itself conveys that there is no object in

hand (Srinivasan, 1994).

Haptics is a perceptual system that uses both cutaneous and kinesthetic inputs to derive information about objects, their properties, and their spatial layout (Loomis and Lederman, 1986). The hand is both an input and an output device for humans: it can sense and act on the environment at the same time. This tight coupling of afferent and efferent information is the defining feature of haptics. Gibson (1962, 1966) made a distinction between passive and active touch in haptics. Active touch is for exploratory manipulation tasks that intend to derive information about the intrinsic properties of objects (Loomis and Lederman, 1986). Passive touch is for performatory manipulation tasks that require moving an object from one position to another. The object manipulation in this study is in the category of performatory manipulation tasks.

1.1.2 Object manipulation in human-computer interaction (HCI)

1.1.2.1 Object manipulation and direct manipulation

Direct manipulation is one of the most important concepts in modern computer interface design theories (Shneiderman, 1983, 1992; Norman and Draper, 1986; Rasmussen, 1986). Direct manipulation interfaces employ object manipulation as the basis of human-computer interaction. Shneiderman (1983) offers a syntactic-semantic-objects-actions (SSOA) model of user behaviour to explain the underlying basis of direct manipulation.

Semantic knowledge is the user's knowledge in long-term memory, which can be separated into the computer and task domains. Within these domains, knowledge is further divided into actions and objects. Syntactic knowledge is varied, device

dependent, acquired by rote memorization, and easily forgotten. The strength of direct manipulation is in focusing on the task domain, reducing the mental load for the computer semantics, and demanding less syntactic knowledge (Hutchins, Hollan and

Norman, 1986; Norman, 1988). Studies on object manipulation will benefit our understanding of human motor control systems and direct manipulation interfaces. The promise of direct manipulation interfaces is to efficiently transfer human real-world manipulative skills to operations on the computer. Direct manipulation interfaces such as graphical user interfaces (GUI) have become standard for modern desktop computer systems. Such systems are generally equipped with a pointing device such as a mouse, a trackball or a joystick. Operations using a pointing device can be considered a form of object manipulation. Unlike in the real world, the haptic display of a pointing device is separated from its visual display, the graphic cursor. Studies have been conducted to compare various pointing devices (Card, English and Burr, 1978; Karat, McDonald and Anderson, 1984; Mithal and Douglas, 1996). Fitts' law has been widely employed as an engineering model for these studies (Card, Moran and Newell, 1983; MacKenzie, 1992). These studies were generally limited to two-dimensional object manipulation such as mouse movements on a desktop. It has been found that human performance follows Fitts' law for pointing tasks in desktop computer settings.

1.1.2.2 Object manipulation in virtual environments

Object manipulation in virtual environments has unique features compared to desktop environments. First, virtual environments usually require multi-dimensional object manipulation. Secondly, object manipulation is the primary form of input in virtual environments, while text command input rarely takes place. Thirdly, virtual environments are immersive, where a

user is generally provided with a stereoscopic, head-coupled view.

Systematic studies of object manipulation in virtual environments are relatively few. Most research focuses on the comparison and evaluation of input devices and

techniques (Zhai, 1995; Hinckley, Tullio, Pausch, Proffitt and Kassell, 1997). Zhai and

Milgram (1993) proposed manipulation schemes for six degree-of-freedom (DOF) target acquisition tasks. They systematically examined the effects of two sensing modes (isotonic and isometric) and two control modes (position and rate) of input devices. They found strong performance advantages for isometric sensing combined with rate control

and for isotonic sensing combined with position control. Arthur, Booth and Ware (1993) conducted a study of human 3D task performance in "fish tank virtual reality", referring to the use of a desktop computer to achieve real-time display of 3D scenes using

stereopsis and dynamic head-coupled perspective. They found that subjects performed

3D tasks with lower error rates for head-coupled stereo than for a static (non-head-coupled) display. A few studies have addressed the relationships among different dimensions of

object manipulation in virtual environments. Zhai and Milgram (1997) show that object manipulation in 3D space demonstrates an anisotropic feature whereby humans have more difficulty controlling the Z dimension (into the computer screen) than the X and Y dimensions. Their results have been confirmed by recent studies by Boritz (1998). Zhai

and Milgram (1997) also found that subjects had difficulties simultaneously transporting and orienting an object in a virtual environment. These results were consistent with the

same observation by Ware (1990). Ware (1998) suggested that these difficulties resulted from the separation of the display domain and control domain in virtual environments. Jacob, Sibert, McFarlane and Mullen (1994) investigated the integrality and separability of input devices in the context of the task domain. They found that the integrality and separability of control dimensions depended on the perceptual structure of visual

information processing, a notion elaborated by Garner (1974). Even equipped with a 3D

input device, their results showed that object size can be simultaneously changed with object location, but object color cannot. The theoretical framework by Jacob et al. (1994)

is closely related to the present research and will be discussed in more detail in the

next section.

1.2 Motivations and rationale for the current research

1.2.1 Perceptual structure of interactive space

According to Garner's theory of the perceptual structure of visual information, a multi-dimensional object can be characterized by its attributes into two categories: integral structure and separable structure (Garner, 1974). Visual information has an integral structure if its attributes can be perceptually combined to form a unitary whole, e.g., the lightness and saturation of a color. If visual object attributes demonstrate perceptually distinct and identifiable dimensions, they are separable. For example, the grey-scale and size of an object have a separable structure. The type of perceptual structure of an object can be determined by direct similarity scaling methods. An integral structure shows a Euclidean distance, while a separable structure demonstrates a city-block distance in the perceptual attribute space (Garner, 1974). Jacob and colleagues (1994) extended Garner's (1974) notion of integral and separable structure to interactive tasks by observing that manipulating a graphic object is simply the changing of the values of its attributes. They reasoned that since the attributes of an object define a perceptual space, changing these values is the same as moving in real

time within the perceptual space of the object. They predicted that the interaction movement in an integral space should follow a Euclidean, straight-line distance between two points, and movement in a separable space should follow a city-block distance and run

parallel to the axes. In turn, the pattern of Euclidean distance or city-block distance indicates the type of perceptual structure. Jacob et al. (1994) also extended the notion of integral and separable structure to describe the attributes of an input device, based on whether it is natural to move diagonally across all dimensions. With an integral device, the movement is in Euclidean space and cuts across all the dimensions of control. A separable device constrains a movement to be along one dimension at a time, showing a city-block pattern. They hypothesized that human performance improves when the perceptual structure of the task matches the control structure of the device. They conducted an experiment in which subjects performed two tasks that had different perceptual structures, using two input devices with correspondingly different control structures: an integral three-dimensional tracker and a separable mouse. The integral task was the control of a graphic object's location and size, and the separable task was the control of an object's location and brightness (grayscale). Their results converged to support their hypothesis. They concluded that the interplay between task and device was more important in determining performance than either task or device alone.
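The two distance metrics underlying Garner's distinction can be made concrete. This is a generic sketch of Euclidean versus city-block distance, not code from any of the studies cited:

```python
import math

def euclidean(p, q):
    """Integral structure: dimensions combine into one straight-line distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def city_block(p, q):
    """Separable structure: distances accumulate one dimension at a time."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Moving diagonally in a two-attribute perceptual space:
p, q = (0.0, 0.0), (3.0, 4.0)
print(euclidean(p, q))   # 5.0 -- cuts across both dimensions at once
print(city_block(p, q))  # 7.0 -- moves parallel to the axes
```

The gap between the two values is what distinguishes an integral movement trajectory from a separable one in the analyses described above.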

The framework proposed by Jacob et al. has had a significant influence on today's human-computer interaction research, particularly in the areas of computer input devices and multi-dimensional object manipulation (Zhai, 1995; Balakrishnan, Baudel, Kurtenbach and Fitzmaurice, 1997; MacKenzie, Soukoreff and Chris, 1997; Boritz, 1998). The current research will further explore the structure of object transportation and

orientation in light of Jacob et al.'s framework.

1.2.2 The structure of object transportation and orientation

Object manipulation by the human hand is a process with two distinctive components: object translation and object rotation. Object translation and rotation are achieved by the transportation and orientation of the human hand, with a goal defined by the target properties. At a descriptive level, task requirements for object translation and rotation are separable from each other: the two tasks could be carried out in serial, overlapping or parallel manners in the time domain. Thus, the structure of the object transportation and orientation processes can be examined in terms of the concurrence between the time courses of the two processes. From the viewpoint of the structural construction of the two processes, a parallel structure is the most efficient one: the total task completion time for the parallel structure is equal to the completion time of the longer process. The serial structure is the least efficient one: the total task completion time is the sum of the completion times of the two processes. The efficiency of an overlap structure lies somewhere between those of the parallel and serial structures. Humans should have a kind of temporal organization to coordinate the two processes in a constructive way to conserve

their resources and energy. Another aspect of the structure of object transportation and orientation concerns the interdependence between the two processes. By using the distances and angles of a target as independent variables, the task can be broken into two components which,

correspondingly, require object translation and rotation. Interdependence examines whether the two processes affect each other. The structure of object transportation and orientation is defined and discussed in detail in Chapter 2. Evidence supporting the "two independent visuomotor channels" hypothesis (Jeannerod, 1984; Paulignan and Jeannerod, 1996) is largely based on specific kinematic

measures of hand reaching and grasping components. The measures are varied, ranging from peak velocity to end-effector errors. It is hard to argue which measure is more representative for each process. Jeannerod (1984, 1988) used the peak velocity of hand transportation and the maximum aperture of hand shaping as indicative of the reaching and grasping components respectively. Jeannerod's (1981, 1984) original form of the "two independent visuomotor channels" hypothesis was intended for hand reaching and hand shaping before contact with an object, and should not be automatically extended to object

transporting and orienting (Paulignan and Jeannerod, 1996). There is evidence showing interdependence between object transportation and orientation. Soechting and Flanders (1993) found that persistent errors in matching a cylinder to an oriented target depended on target location and arm posture as well as target orientation. They implied that the neural transformation from target orientation to hand orientation is not independent of the transformation dealing with target location. Desmurget et al. (1996) asked subjects to reach and grasp cylindrical targets presented at

a given location, with different orientations. During the movements, target orientation was either kept constant or modified at movement onset (perturbed trials). They concluded that arm transport and hand orientation were neither planned nor controlled independently. They further suggested that an integrated control of hand transportation and orientation is programmed, from an initial configuration, to smoothly reach a final posture that corresponds to a given "location and orientation" as a whole.

1.2.3 Effects of haptic and visual information

In human-computer interaction, human performance is constrained by system environments, task requirements and current technologies. Virtual environments can

simulate the real world to a certain extent, but they also lose or distort the haptic and visual representation of the real world. The sensory information presented by the computer, such as visual information and haptic information, is usually distorted or impoverished in comparison with that in the real world. Some object manipulation skills

may be realized or transferred as they are in the real world, but some of them may be impeded or filtered out, depending on the properties of the interface. On the other hand, human performance can be enhanced by the features of computer-mediated environments. One important feature of object manipulation in computer-mediated environments is that the manual control space and visual display space are usually separated (Srinivasan, 1994). In the real world, the haptic and visual information of an object is consistent. In other words, the haptic and visual displays of an object are superimposed; that is, what we see is consistent with what we feel. The relationship between the haptic information of an object and its visual representation differs between the real world and virtual environments. For example, in a standard desktop setup, the mouse is controlled in hand space, but its visual representation, the cursor, is moved in display space (Graham and MacKenzie, 1996). The cursor and the mouse are inconsistent in location, shape and size. Object manipulation is a process relying on the visual and haptic information of an object

presented to the user (MacKenzie and Iberall, 1994). This raises the question of how changes in the relationship between the visual and haptic information of objects may affect human performance. Is visual information alone sufficient to perform object manipulation, as suggested by the evidence of visual dominance (Posner et al., 1976)?

Can humans achieve better performance if the visual information of an object is augmented with haptic information, as suggested by Graham and MacKenzie (1996)? Do

haptic and visual information play different roles in certain aspects of object manipulation, such as the transportation component and orientation component? It is important to identify the roles of visual information and haptic information in object manipulation. Virtual environments are immersive. Human object manipulation in such environments can be influenced by the contextual constraints of visual and haptic information as well as the domain constraints (MacKenzie and Marteniuk, 1985; Rasmussen, 1990). Domain constraints are the haptic and visual information provided by the object itself, such as size and shape. Contextual constraints are surrounding

information presented during object manipulation. For example, the graphic background provides a contextual visual constraint, and a physical table that limits the hand movement imposes a contextual haptic constraint. Contextual constraints not only provide realism in a virtual environment, but also affect human performance. Some constraints may enhance human performance while others may degrade it. It is important to identify and understand the effects of such contextual constraints.

1.2.4 Implications for HCI design

Virtual environments provide a new way for human-computer interaction. It is critical to take a user-centered approach to virtual environment design (Norman and Draper, 1986). Many studies on object manipulation in virtual environments start with a novel design or technique and then evaluate the usability of that design or technique. Results from these studies are often system-specific and lack generality. Few studies have been done to address the mechanisms underlying human performance in virtual environments (Ware, 1990, 1998; Arthur et al., 1993; Zhai and Milgram, 1993, 1997; Jacob et al., 1994; Graham and MacKenzie, 1996; Boritz, 1998). Making object manipulation in HCI "natural" or realistic is only one aspect of interface design (Hoffman, 1998). Another aspect is the effectiveness of the interface for object manipulation. HCI environments may add extra complexity to object manipulation due to the distortion or deprivation of sensory information available in the real world. At the same time, simplified, augmented or enhanced critical information presented in HCI may improve human performance. "Natural" object manipulation is not necessarily "optimal" object manipulation. One of the major challenges for user

interface design is to identify the information that is critical to certain tasks in order to enhance human performance in a computer environment.

In this study, we use a virtual environment as a testbed to systematically investigate human performance in virtual environments. Findings from this study can enrich the theoretical framework and provide implications for HCI design.

1.3 Research objectives

Object manipulation is an important topic both in human motor control and human-computer interaction. The objectives of this study are:

1. To identify the structure of object transportation and orientation in virtual environments;

2. To investigate the role of haptic and visual information in object transportation and orientation;

3. To provide implications for human-computer interaction design.

Chapter 2

Model and Methodology

Object manipulation in virtual environments generally involves both object transportation and orientation. In this chapter, the structure of the object transportation and orientation processes is defined. The structure is examined in terms of the concurrence and interdependence between the two processes. Then, the experimental apparatus and procedures are described in detail. Finally, an overview of the four experiments conducted in this research is provided.

2.1 The structure of object transportation and orientation

2.1.1 Terminology

There are generally three essential objects for successful object manipulation in virtual environments: the physical object, the cursor object and the target object. The physical object is the controller or input device, such as a mouse. The cursor object is the graphic object driven by the physical object; it can be considered the visual representation of the physical object. The target object determines aspects of the goal of object manipulation. Object manipulation can be defined as an interaction among these three objects. The haptic and visual information presented by the physical object, cursor object and target object are termed domain constraints in this thesis. There is other information available for object manipulation in virtual environments: the information surrounding these three objects. For example, the graphic background and spatial constraints may be

implemented for object manipulation. We refer to the surrounding information for object manipulation as contextual constraints. All objects in this research are rigid bodies. The spatial position of a rigid body can be completely determined by six degrees of freedom (DOF), three for its location and three for its orientation. Therefore, the state of object manipulation can be spatially described in terms of object translation and object rotation (Kawato, Uno, Isobe and Suzuki, 1987). In this study, the measurement was made on the physical object in the

hand space. The physical object was the end-effector manipulated by transporting and orienting the human hand. The terms "object transportation and orientation" were chosen over "object translation and rotation" to reflect that object manipulation is a motor output

process of the human hand (Cunningham and Welch, 1994).
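As a minimal illustration of this six-DOF description, a rigid object's state can be recorded as three location values and three orientation values; the field names below are illustrative, not taken from the thesis software.

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """Spatial state of a rigid object: three location DOF (e.g. in mm)
    and three orientation DOF (e.g. degrees about the x, y, z axes)."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

# A target 100 mm away along x, rotated 45 degrees about the z axis:
target = Pose6DOF(x=100.0, yaw=45.0)
print(target.x, target.yaw)
```

Specifying only a target distance (a translation) or only a target angle (a rotation) then corresponds to varying one subset of these six values while holding the rest constant.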

2.1.2 Structural model

The structural model was developed first to decompose the object transportation and orientation processes, and then to examine the relationship between the two processes. The object transportation and orientation processes are illustrated in Figures 2.1 and 2.2. The solid line represents the object transportation process and the dashed line the

object orientation process. The transportation process is assumed to use the target distance as its input, the transportation time (TT) as its temporal output, and distance errors as its spatial outputs. The orientation process is considered to use the target angle

as its input, the orientation time (OT) as its temporal output, and angle errors as its spatial outputs. The structure is defined as the relationship between the transportation

and orientation processes. The structure is examined with two aspects: concurrence and interdependence.

Concurrence indicates the relative time course between the transportation process and the orientation process. Three possible time courses between the two processes may

occur: parallel, overlap and serial, as shown in Figure 2.1. Different structures can be determined by the relative starts and ends of the two processes. A parallel structure means that one of the two processes starts at the same time as or earlier than the other process and ends at the same time as or later than the other process. Case A in Figure 2.1 is a parallel structure where the transportation and orientation processes start and end at the same time.

Case (A) indicates a special situation of the parallel structure: the transportation

process starts earlier and finishes later, and therefore completely contains the orientation process. Case B shows that there is an overlap between the end of the transportation process and the beginning of the orientation process. A serial structure means that the two processes are executed one by one: the second process does not start until the first process has finished. As shown in Case C in Figure 2.1, the orientation process does not start until the transportation process has finished.

In terms of task completion time, a parallel structure of two processes is the most efficient, while a serial structure is the most time-consuming. The overlap structure sits between the serial and parallel structures, and can be cost-effective and practical in some cases. There is no optimal structure without the context of the task.
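The classification above can be sketched as a small function over the start and end times of the two processes; the numeric examples are invented for illustration.

```python
def concurrence(t_start, t_end, o_start, o_end):
    """Classify the relative time courses of the transportation
    (t_start..t_end) and orientation (o_start..o_end) processes."""
    if o_start >= t_end or t_start >= o_end:
        return "serial"    # one process starts only after the other ends
    if (t_start <= o_start and t_end >= o_end) or \
       (o_start <= t_start and o_end >= t_end):
        return "parallel"  # one process fully contains the other
    return "overlap"       # the processes partially overlap in time

print(concurrence(0.0, 1.0, 0.0, 1.0))  # parallel (same start and end, Case A)
print(concurrence(0.0, 1.0, 0.3, 0.8))  # parallel (containment, Case (A))
print(concurrence(0.0, 1.0, 0.7, 1.5))  # overlap (Case B)
print(concurrence(0.0, 1.0, 1.0, 1.6))  # serial (Case C)
```

Total task time follows directly: the maximum of the two durations in the parallel case, and their sum in the serial case.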

Figure 2.1. Concurrence of time courses of the object transportation and orientation processes: (A) parallel, ((A)) containment, (B) overlap, (C) serial. TT = object transportation time (solid line); OT = object orientation time (dashed line).

Figure 2.2. Interdependence of time courses of the object transportation and orientation processes: (A) a change in the transportation input affects the transportation output only; (B) a change in the transportation input affects both the transportation and orientation outputs.

Interdependence is defined as the interaction between the object transportation and orientation processes in this study.¹ We adopt a method used in computer

architecture to test the interdependence between two processes (Hwang, 1993). According to the conditions proposed by Bernstein (1966), two processes are independent if, and only if, all three of the following conditions are met (Hwang, 1993):

1. The input of the first process has no effect on the output of the second process;

2. The input of the second process has no effect on the output of the first process;

3. The output of the first process and the output of the second process have no effect on each other.

If any of Bernstein's conditions is violated, the two processes are interdependent. In the literature on object manipulation, the relationship between the transportation process and the orientation process has not been examined rigorously using Bernstein's conditions. In this study, the interdependence between the transportation and orientation processes is primarily examined with Bernstein's first and second conditions. The third condition tests the sequential effects of the two processes: whether the output of one process becomes the input to the other. The sequence of the two processes is not a major concern for this research.

Task requirements for object transportation and orientation can be instrumented by specifying the target distance and the target angle, respectively. For the object transportation process, the target distance is the input and the transportation time (TT) is the output; for the object orientation process, the target angle is the input and the orientation time (OT) is the output. Thus, Bernstein's conditions can be applied to test the interdependence of the two processes. As shown in Case A in Figure 2.2, the input for the transportation process only affects the TT output, not the OT output. It is therefore concluded that the orientation process is independent of the transportation process. In contrast, if the input for the transportation process changes the output of the orientation process, OT, as demonstrated in Case B, the orientation process depends on the transportation process. Vice versa, if the input for the orientation process affects the TT output, the transportation process depends on the orientation process.

¹ The terminology used in this study may be different from some literature. Some researchers used "parallel" to describe no interaction between two processes (Soechting and Flanders, 1993). We used "parallel" strictly for the relative time courses of two processes, as shown in Case A of Figure 2.1. We used "interdependence" to describe the interaction between two processes.
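The application of Bernstein's first two conditions can be sketched in code. Given a table of one process's mean output across the distance and angle conditions, the process is affected by a factor if its output changes when only that factor's level changes. The function name, tolerance and example means below are illustrative assumptions; the actual analysis used repeated-measures ANOVA on trial data:

```python
import numpy as np

def factor_effects(output_by_condition, tol=1e-9):
    """Which inputs affect a process's output on a 2-factor design.

    `output_by_condition` holds one process's mean output (e.g. TT
    in ms) indexed as [distance_level, angle_level].  Returns whether
    the output varies with distance and with angle.  Comparing
    condition means like this is only a sketch of the statistical
    test described in the text.
    """
    out = np.asarray(output_by_condition, dtype=float)
    by_distance = bool(np.ptp(out, axis=0).max() > tol)  # range across distances
    by_angle = bool(np.ptp(out, axis=1).max() > tol)     # range across angles
    return by_distance, by_angle

# Hypothetical mean TT (ms) for 3 distances x 2 angles.  TT varies
# down the rows (distance input) AND across the columns (angle
# input), so Bernstein's second condition is violated: the
# transportation output depends on the orientation input.
tt = [[600, 620],
      [700, 730],
      [820, 860]]
print(factor_effects(tt))   # -> (True, True)
```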

2.2 Methodology for the experiments

2.2.1 Experimental apparatus

The Virtual Hand Laboratory (VHL) was used to conduct the experiments for this research. As shown in Figure 2.3, the VHL setup consisted of an OPTOTRAK motion analysis system including a number of infrared emitting diode (IRED) markers (Northern Digital, Inc.), an SGI Indigo2 Extreme computer system (Silicon Graphics Inc.), and CrystalEYES goggles (StereoGraphics). Three infrared markers were fixed to the side frame of the goggles. The 3D position information from these three IREDs was monitored with the OPTOTRAK system. Another three IREDs (not shown in Figure 2.3) were placed on the top of a physical object, a wooden or plastic cube, held in the hand. If required, a graphic cursor cube could be displayed in real time, driven by the 3D position information from the IREDs on the physical object. The subject wore CrystalEYES goggles to obtain a stereoscopic, head-coupled view in the VHL. The graphic display was updated at 60 Hz, with about 1 frame of lag and 0.2 mm spatial errors of OPTOTRAK coordinates.


Figure 2.3. The Virtual Hand Laboratory setup.

The graphic display was presented on an SGI RGB monitor of 1280 by 1024 pixels over an illuminated area of 350 by 280 mm. In the VHL, the display space and the hand workspace were superimposed via a mirror. As shown in Figure 2.3, the monitor was placed screen down on a specially constructed cart. A half-silvered mirror was placed parallel to and between the computer screen and the table surface. The image on the screen was reflected by the mirror, and was perceived by the subject as if it were in a workspace on the table surface. The images for the left and right eye were alternately displayed and were synchronized with the goggles to provide the subject with a stereoscopic view. There was a light under the mirror (not shown in the figure) to control the visual conditions. When the light was on, the subject could see through the mirror, thus providing visual feedback for the hand and the physical object. When the light was off, the subject could see neither the hand nor the physical object. The image displayed on the screen was always visible and appeared to the subject in the workspace, regardless of whether the light was on or off.

The physical object, a wooden or plastic cube, was the controller, serving as the input device. Among the three IREDs on the top of the controller cube, IRED 1 was at the center, IRED 2 was at the left corner and IRED 3 was at the right corner. The 3D position information from these three IREDs was used to drive a six-degree-of-freedom wireframe graphic object, the cursor cube (not shown in Figure 2.3). The 3D position data from the IREDs on the physical cube were also recorded for data analysis. The target was a wireframe graphic cube generated on the monitor, appearing on the top of the table surface for the subject looking into the mirror, as shown in Figure 2.3. The graphic target was placed along the horizontal center axis of the display, which was aligned with the subject's body midline. The target angle was generated about a vertical axis. A thin physical L-frame (not shown in the figure) was used to locate the starting position of the physical cube at the beginning of each trial.
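Driving a cursor cube from the markers is straightforward to sketch: the centre IRED gives the translation, and the planar angle of the centre-to-corner vector gives the rotation about the vertical axis. The coordinate conventions and marker layout below are illustrative assumptions, not the VHL's exact implementation:

```python
import math

def cursor_pose(ired1, ired2):
    """Cursor cube position and yaw from two top-surface markers.

    `ired1` is assumed to sit at the cube-top centre and `ired2` at
    a corner; both are (x, y, z) in mm with z vertical.  The yaw
    about the vertical axis is the planar angle of the
    centre-to-corner vector; subtracting the angle measured at the
    start of a trial would give the cube's rotation.
    """
    x1, y1, z1 = ired1
    x2, y2, _ = ired2
    yaw = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return (x1, y1, z1), yaw

# Corner marker 45 degrees off the x-axis from the centre marker:
pos, yaw = cursor_pose((100.0, 50.0, 30.0), (110.0, 60.0, 30.0))
print(yaw)   # -> 45.0
```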

2.2.2 Experimental procedure

Prior to each experimental session, the display space was calibrated with the workspace (see detailed description of calibration procedures by Summers, Booth, Calvert, Graham and MacKenzie, 1999). Four graphic crosses were displayed, and four IREDs were placed on the table surface at the locations of the crosses. The display space and workspace were then calibrated and aligned according to the information from these four IREDs sensed by the OPTOTRAK system. When the graphic cursor cube was used, as in Experiments 2, 3 and 4, a calibration was made between the physical cube and the graphic cursor cube. The spatial relationship between the physical cube and the graphic cursor cube was pre-determined by the three IREDs on the top of the physical cube. According to the experimental conditions, the cursor cube could be calibrated to be superimposed with the physical cube, disoriented from the physical cube, or scaled up or down in size from the physical cube.
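One plausible way to solve such a four-point calibration is a least-squares affine fit between the IRED locations on the table and the graphic crosses. The code below is an illustrative reconstruction with hypothetical correspondences; the actual VHL procedure is described by Summers et al. (1999) and may differ:

```python
import numpy as np

def fit_affine_2d(workspace_pts, display_pts):
    """Least-squares 2-D affine map from workspace to display space.

    Each table-surface point (x, y) is paired with its graphic cross
    (u, v); the six affine parameters are solved with least squares.
    """
    P = np.asarray(workspace_pts, dtype=float)
    Q = np.asarray(display_pts, dtype=float)
    A = np.column_stack([P, np.ones(len(P))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, Q, rcond=None)   # 3x2 parameter matrix
    return M

def to_display(M, pt):
    """Map a workspace point through the fitted transform."""
    return np.array([pt[0], pt[1], 1.0]) @ M

# Hypothetical correspondences: the 350 x 280 mm illuminated area
# mapped onto the 1280 x 1024 pixel display.
M = fit_affine_2d([(0, 0), (350, 0), (350, 280), (0, 280)],
                  [(0, 0), (1280, 0), (1280, 1024), (0, 1024)])
```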

Upon the arrival of a subject, the individual eye positions of the subject were calibrated to get a customized stereoscopic, head-coupled view. The subject wore the CrystalEYES goggles with three IREDs on the side facing the OPTOTRAK camera. Two bars, each with a small pinhole, were provided. The subject was asked to align the bars over the goggles, one for the left eye and the other for the right eye. Each eye had a monocular view of the target through the pinhole. The bars were then adjusted so that the subject could see the same fused image with both eyes through the pinholes looking straight ahead, as though there were only a single hole. Two IREDs were then placed to cover the two holes. The 3D position information of the two IREDs on the bars and the three IREDs on the goggle side was captured by the OPTOTRAK camera. These data were processed and analyzed to estimate the eye positions relative to the head coordinate system for each subject.

The subject was comfortably seated at a table, with the forearm at approximately the same height as the table surface. The subject held the wooden or plastic cube with the right hand, with the thumb and index finger in pad opposition on the center of opposing cube faces which were parallel to the frontal plane of the body. The task was to align or match a physical cube or graphic cursor cube with the location and orientation of the graphic target cube. Both object transportation and orientation were required. The subject was asked to match either the physical cube or the graphic cursor cube to the graphic target cube as fast and accurately as possible. The target distances and angles were randomly ordered over trials, as experimental conditions. In Experiment 1, a mouse was operated with the subject's left hand to control the start and end of a trial. In other experiments, the experimenter controlled the start and end of a trial with a mouse on another table, to avoid the possibility of control interference between the two hands of the subject. The experiments were conducted in a semi-dark room.

2.2.3 Data analysis

OPTOTRAK 3D position data collected from two IREDs on the top of the physical cube were analyzed. During data analysis, object transportation data were obtained from IRED 1 at the top center of the physical cube. Object orientation data were derived from both IRED 1 at the center and IRED 2 at the left corner of the top of the cube. The original IRED position data were interpolated for missing frames. Data were filtered with a 7 Hz low-pass second-order bi-directional Butterworth digital filter to remove digital sampling artifacts, vibrations of the markers, and tremor from the hand movement. Data were filtered only once, and then were used for the following data manipulation, including angular data computation for object orientation.

A computer program determining the start and end of a movement was used for the transportation and orientation processes separately, based on criterion velocities (Graham and MacKenzie, 1996). The computer program found the first occurrence in a trial of the peak criterion velocity for a process, and then worked backwards looking for the first occurrence of the start criterion velocity. This was then used as the start of the process. The end of the process was determined by looking forwards for the mean criterion velocity as an average over a predetermined number of frames. This program was applied with two sets of criterion velocities, respectively, for the object transportation and orientation processes. The total task completion time was determined by the earlier start and the later end of either object transportation or orientation.

The start and end of each process were confirmed for each trial by visually inspecting a graph of the velocity profile. A trial was rejected if the program failed to find a start and end or there was disagreement between the experimenter's visual

inspection and the computer's results. Less than 1% of trials were rejected for each experiment.

Temporal dependent measures of object manipulation were task completion time (CT), transportation time (TT), orientation time (OT), and the relative time courses of the transportation and orientation processes. CT was defined as the total movement time to complete the task, which should be equal to or greater than the longer of the transportation and orientation processes. TT was the transportation time determined with transportation data from IRED 1 at the center of the cube top. OT was determined with orientation data calculated from the rotation of the cube around a vertical axis. The relative time courses of the object transportation and orientation processes were determined from the starts and ends of the two processes.

Spatial errors of object transportation and orientation were measured in terms of constant errors and variable errors over the trials in each experimental condition. Constant errors were the average difference in distance or angle between the end position of the object being manipulated and the targeted position under an experimental condition. Variable errors were the standard deviation of the trials under an experimental condition. Spatial error measures were constant errors of distance (CED), constant errors of angle (CEA), variable errors of distance (VED) and variable errors of angle (VEA).
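The filtering and movement-detection steps described above can be sketched in code. The filter below builds a standard second-order Butterworth low-pass via the bilinear transform and runs it forwards then backwards so the phase shifts cancel; the segmentation follows the criterion-velocity logic of Graham and MacKenzie (1996) as described in the text. The sampling rate, criterion values and window length are illustrative assumptions, not the thesis's actual settings:

```python
import numpy as np

def butter2_lowpass(x, fc, fs):
    """Bi-directional (zero-phase) 2nd-order Butterworth low-pass.

    Coefficients come from the bilinear transform with frequency
    prewarping; filtering forwards and then backwards doubles the
    attenuation and cancels the phase lag.
    """
    k = np.tan(np.pi * fc / fs)
    q = 1.0 / np.sqrt(2.0)                      # Butterworth Q factor
    norm = 1.0 / (1.0 + k / q + k * k)
    b = np.array([1.0, 2.0, 1.0]) * k * k * norm
    a = np.array([2.0 * (k * k - 1.0), 1.0 - k / q + k * k]) * norm

    def one_pass(sig):
        y = np.empty_like(sig)
        x1 = x2 = sig[0]                        # pad with first sample
        y1 = y2 = sig[0]                        # steady-state start
        for n, xn in enumerate(sig):
            yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
            x2, x1, y2, y1 = x1, xn, y1, yn
            y[n] = yn
        return y

    return one_pass(one_pass(np.asarray(x, dtype=float))[::-1])[::-1]

def find_movement(v, peak_crit, start_crit, end_crit, n_frames=5):
    """Start and end frames of a movement from a velocity profile.

    Find the first frame reaching the peak criterion, work backwards
    to the first frame still at or above the start criterion, then
    look forwards for the first window of `n_frames` whose mean
    velocity drops to the end criterion.  Returns None if no start
    and end are found (such trials were rejected).
    """
    v = np.asarray(v, dtype=float)
    above = np.nonzero(v >= peak_crit)[0]
    if above.size == 0:
        return None
    start = peak = above[0]
    while start > 0 and v[start - 1] >= start_crit:
        start -= 1
    for i in range(peak, len(v) - n_frames + 1):
        if v[i:i + n_frames].mean() <= end_crit:
            return start, i
    return None
```

Run separately with transportation criteria on the tangential velocity of IRED 1 and with orientation criteria on the angular velocity, this yields the two start/end pairs from which CT and the relative time courses can be computed.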

2.3 Overview of experiments

The objectives of this research were to identify the structure of object transportation and orientation in virtual environments, to investigate the role of haptic and visual information in object transportation and orientation, and to provide implications for human-computer interaction design. The experiments were designed to examine human performance on object manipulation rather than to test the properties of specific computer systems. A series of four experiments was conducted:

Experiment 1: The structure of object transportation and orientation;
Experiment 2: The effect of controller, cursor and target sizes;
Experiment 3: The effect of spatial orientation disparity between haptic and visual displays;
Experiment 4: The role of contextual haptic and visual constraints.

Experiment 1 was designed first to establish the structure of object transportation and orientation as a reference point for subsequent experiments. Target distances and angles were manipulated as two independent variables. From the structural model presented earlier in this chapter, the transportation process had target distance as its input, and transportation time (TT) as its temporal output. The orientation process used target angle as its input, and orientation time (OT) as its temporal output. Errors of target distances and angles were the spatial outputs of the transportation and orientation processes, respectively. Tasks were performed under two visual feedback conditions: a vision condition and a no vision condition. Under the vision condition, subjects were able to see the hand and the controller, a physical cube, as well as the target. Under the no vision condition, the hand and the controller were not in view during object manipulation. The vision condition was similar to the one under which humans perform object manipulation in the real world, except that the target was a graphic display. Therefore, object manipulation under the vision condition can be thought of as "natural" object manipulation. In contrast, the no vision condition was completely "virtual", i.e., hardly ever taking place in the real world. The visual feedback conditions in various virtual environments probably lie between these two extreme vision conditions. Experiment 1 set a range of human performance in object manipulation under two extreme visual conditions, and laid the framework for the following experiments.

Experiment 2 investigated the interplay of haptic and visual information in virtual environments. We used a common attribute of the controller, cursor and target as the independent variable: size. The controller, the cursor and the target each had a small and a large size. The controller was in the hand's workspace, the target was in the display space, and the cursor was presented in the display space but controlled in the hand's workspace. The interaction among the target, cursor and controller sizes should reveal the role of haptic and visual information for object transportation and orientation.

Experiment 3 further explored the role of haptic and visual information in object manipulation. Conflicting haptic and visual information was provided by a disparity in spatial orientation between the haptic and visual displays of an object. The subject was instructed to use either the haptic information or the visual information to complete the task. This experiment allowed us to further understand how humans use haptic and visual information for object manipulation.

Experiment 4 examined another aspect of object manipulation in virtual environments: contextual constraints. Two kinds of contextual constraints were provided: a haptic (mechanical) constraint for the hand movement and a visual (graphic) background for the display. This experiment identified the role of contextual haptic and visual constraints on human performance in virtual environments.

All four experiments were carried out using the Virtual Hand Laboratory at Simon Fraser University with the model and methodology described in previous sections. Research hypotheses and experimental design will be discussed in detail for each experiment in subsequent chapters.

Chapter 3 Experiment 1:

The Structure of Object Transportation and Orientation

3.1 Introduction

Object transportation and orientation are common tasks in virtual environments. One important question is, what is the relationship between the object transportation and orientation processes? Experiment 1 addressed this question by exploring the structure of object transportation and orientation by the human hand in a virtual environment.

Two streams of research are related to the structure between object transportation and object orientation, but provide no ready answers. One stream of research is on the structure of interactive graphic manipulation in HCI, derived from the perceptual structure of visual information (Garner, 1974; Jacob et al., 1994); the other is research on hand prehension in human motor control (Jeannerod, 1984). The notion of integral and separable structure by Jacob et al. (1994) is not automatically applicable to multi-dimensional object manipulation, including object transportation and orientation by the human hand. The original notion of integral and separable structure by Garner only deals with intrinsic perceptual properties of an object, such as size and color (Garner, 1974). The location and orientation of an object are extrinsic properties (Jeannerod, 1984). Jacob et al.'s (1994) framework has not explicitly addressed the relationship between object transportation and orientation. Some researchers observed that subjects could achieve simultaneous object transportation and orientation using a six-degree-of-freedom controller, while others found that it was rather difficult for subjects to transport and orient an object at the same time (Ware, 1990; Zhai and Milgram, 1993, 1997). It is arguable whether or not a structure in a perceptual space (Garner, 1974) can be extended to an interactive space (Jacob et al., 1994).

The "two independent visuomotor channels" hypothesis by Jeannerod (1984) was developed originally for the phase of prehension before the hand makes contact with a target object (MacKenzie and Iberall, 1994). Even though Jeannerod's hypothesis is supported by electrophysiological, neuropsychological and psychophysical evidence (Paulignan and Jeannerod, 1996), it may be inappropriate to extend Jeannerod's hypothesis to the relationship between object transportation and orientation for an object in hand. Human prehension with an object in hand can be very different from reaching and grasping prior to contact with an object (Soechting and Flanders, 1993; Desmurget et al., 1996; Goodale, Jakobson and Servos, 1996; Soechting, Tong and Flanders, 1996).

In fact, the assumptions underlying the theoretical framework by Jacob et al. (1994) and the "two independent visuomotor channels" theory by Jeannerod (1984) appear to point in opposite directions. Based on the notion of integral and separable perceptual structure, object transportation and orientation could be integral because spatial attributes are generally considered integral (Garner, 1974; Jacob et al., 1994). On the other hand, the hypothesis of "two independent visuomotor channels" suggests that the control of object transportation and orientation may be separable (Jeannerod, 1984). Results regarding the relationship between object transportation and orientation are not consistent across the two streams of research.

We used the structural model and methodology described in Chapter 2 to examine the relationship, or structure, between the object transportation and orientation processes in terms of concurrence and interdependence. Concurrence indicates the relationship between the time courses of the two processes. Interdependence reflects the interaction between the object transportation and orientation processes. This experiment was designed to test the following hypotheses:

1. Object transportation and orientation have a parallel structure;
2. Object transportation and orientation are interdependent;
3. Object transportation and orientation processes have different effects on object manipulation, and one process may dominate the other;
4. Visual feedback information improves human performance for both object transportation and orientation.

3.2 Method

3.2.1 Subjects

Eight university student volunteers were each paid $20 for participating in one two-hour experimental session. All subjects were right-handed, and had normal or corrected-to-normal vision. All subjects had experience using a computer. Informed consent was provided before the experimental session.

3.2.2 Experimental setup

The Virtual Hand Laboratory (VHL) setup was used for this experiment. As described in Chapter 2, the VHL provided a stereoscopic, head-coupled virtual environment for the subject (see Figure 2.3). Only the physical object and the graphic target were used for this experiment. The physical object was a wooden cube of 30 mm with three IREDs on its top. The graphic target was a wireframe cube of 30 mm which appeared to the subject to be sitting on the table surface. The target cube was randomly generated at one of three distances along the direction of the body midline of the subject and one of two angles clockwise about a vertical axis. The target distances were 30 mm, 100 mm or 200 mm away from the start position, and the angles were 22.5 or 45 degrees.

The light under the mirror was used to control the visual condition. As shown in Figure 3.1A, when the light was on, the subject could see the hand, the wooden cube and the graphic target. When the light was off, as shown in Figure 3.1B, the subject could only see the graphic target, with no vision of the hand and the wooden cube. A black background was displayed on the computer screen. This experiment was conducted in a semi-dark room.


Figure 3.1. The Virtual Hand Laboratory setup for Experiment 1. The target is a graphic cube (dashed line). The cube (solid line) in the hand is a wooden cube serving as the controller. Panel A shows that when the light under the mirror is on, the subject can see the wooden cube and the hand. Panel B shows that when the light under the mirror is off, the subject can only see the graphic target cube, with no vision of the controller and the hand.

3.2.3 Procedure

The task was to move and match the wooden cube to the location and orientation of the target cube as fast and accurately as possible. To start a trial, the subject pressed the left mouse button with the left hand; this generated the graphic target cube at one of three locations with one of two orientations. The subject then moved and matched the wooden cube to the target. When the subject was satisfied with the match, he/she pressed the middle mouse button to end that trial. Trials were blocked by two visual conditions: with visual feedback (the subject's hand and the wooden cube were visible as well as the graphic target), or without visual feedback (only the graphic target was visible). Hereafter, these are referred to as the vision and no vision conditions, respectively. Four subjects started with the vision condition; the other four started with the no vision condition. The order of target locations and orientations was randomized within a block. For each experimental condition, 15 trials were collected. At the beginning of the session, subjects were given two trials of practice for each experimental condition.

3.2.4 Data analysis

Data were treated and analyzed as described in Chapter 2. Temporal dependent measures were task completion time (CT), transportation time (TT), orientation time (OT), and the relative time courses of the transportation and orientation processes. Spatial errors of object transportation and orientation were measured in terms of constant errors and variable errors, including constant errors of distance (CED), constant errors of angle (CEA), variable errors of distance (VED) and variable errors of angle (VEA).

An analysis of variance (ANOVA) was performed on the balanced design of 2 visual conditions x 3 target distances x 2 target angles with repeated measures on all three factors. The effects of target location and orientation were also examined under each visual condition. Two-way ANOVAs were performed separately with and without vision of the hand and the wooden cube.
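The error measures just listed reduce to a mean signed deviation (constant error) and a standard deviation (variable error) over the trials of a condition. A minimal sketch, with trial values invented for illustration:

```python
import numpy as np

def spatial_errors(final_values, target_value):
    """Constant and variable error for one experimental condition.

    `final_values` are the trial-final distances (mm) or angles
    (degrees); the constant error is the mean signed deviation from
    the target, and the variable error is the standard deviation of
    the trials.
    """
    dev = np.asarray(final_values, dtype=float) - target_value
    return dev.mean(), dev.std()

# e.g. CED and VED for four hypothetical trials toward a 100 mm target:
ced, ved = spatial_errors([101.0, 99.0, 102.0, 98.0], 100.0)
print(ced, ved)   # -> 0.0 1.58...
```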

3.3 Results

3.3.1 Temporal measures

The structure of object transportation and orientation was examined in terms of concurrence and interdependence. Within each, we first discuss the human performance where visual feedback of the hand and wooden cube was available. Following this, the results of object manipulation are reported where visual feedback of the hand and the object was unavailable. Finally, a comparison between the two visual conditions is made.

3.3.1.1 Concurrence

Concurrence with vision of the hand and object

Overall, task completion time (CT) in the vision condition had an average value of 776 ms. The average transportation time (TT) was 766 ms in the visual feedback condition, only 10 ms shorter than the total task completion time (CT). The average orientation time (OT) was 479 ms in the vision condition, 297 ms shorter than CT. Apparently, the average CT was much less than the sum of the average TT and OT. The concurrence of the time courses between the two processes in the vision condition is shown in Figure 3.2. Experimental results clearly demonstrated that object transportation and object orientation were processed in parallel in the time domain.

In general, object manipulation first started with the object transportation process. After a very short period, an average of 30 ms, the orientation process started, joining the transportation process. The simultaneous execution of the two processes remained for an average period of 479 ms until the orientation process finished. The transportation process continued for another 257 ms on average, and object manipulation ended. Statistics showed a significantly earlier start (F(1, 7) = 10.27, p < .001) and later end (F(1, 7) = 186.26, p < .001) of the transportation process than the orientation process. The transportation process took in total 287 ms longer than the orientation process (F(1, 7) = 155.27, p < .001). The total task completion time was mainly determined by object transportation, that is, the transportation process was the critical path.

The parallel structure of object transportation and orientation was stable over all experimental conditions. However, a detailed analysis showed that the overlapping portion of the two processes changed with experimental conditions. The orientation process took longer as the target distance increased (F(2, 14) = 15.56, p < .001) or the target angle was larger (F(1, 7) = 76.26, p < .001). The difference in the starts of the two processes increased with the target distance (F(2, 14) = 7.96, p < .01), but decreased with the target angle (F(1, 7) = 8.96, p < .05). Subjects started the orientation process a little later when the target distance was longer. Similarly, the difference in the ends of the two processes increased with the target distance (F(2, 14) = 39.34, p < .001). The fact that the difference in the ends decreased with the target angle (F(1, 7) = 37.85, p < .001) may be due to the longer time needed to orient the object 45 degrees rather than 22.5 degrees.


Figure 3.2. Time courses of object transportation and orientation processes with vision of the hand and wooden cube. White bars indicate the transportation time course. Dark bars indicate the orientation time course.


Figure 3.3. Time courses of object transportation and orientation processes with no vision of the hand and wooden cube. White bars indicate the transportation time course. Dark bars indicate the orientation time course.

Concurrence with no vision of the hand and object

The structure of object manipulation under the no vision condition was similar to that under the "natural vision" condition in terms of the concurrence between the two processes (Figure 3.3). The transportation and orientation processes were executed in parallel. The orientation process was contained within the transportation process, that is, the object transportation started earlier and finished later than the object orientation. The difference in the starts of the two processes increased significantly with the target distance (F(2, 14) = 16.01, p < .001), and decreased significantly with the target angle (F(1, 7) = 40.86, p < .001). The difference in the ends of the two processes increased significantly with the target distance (F(2, 14) = 10.17, p < .01), and decreased significantly with the target angle (F(1, 7) = 13.22, p < .01).

Effects of vision conditions

Effects of visual feedback on object manipulation were examined with pooled data over the two vision conditions. An ANOVA was performed with repeated measures on vision conditions, target distances and target angles. Overall, visual conditions had no main effects on CT, TT and OT. However, there were interactions between visual conditions and target distances on CT (F(2, 14) = 5.10, p < .05), TT (F(2, 14) = 4.39, p < .05) and OT (F(2, 14) = 6.07, p < .05).

The difference in the concurrence between the two vision conditions can be examined by comparing Figure 3.2 and Figure 3.3. Deprivation of vision of the hand and the object significantly delayed the start of the orientation process relative to the start of the transportation process, F(1, 7) = 8.05, p < .05. The average difference between the starts of the two processes increased from 30 ms in the vision condition to 64 ms in the no vision condition. Vision had no significant effects on the difference between the ends of the two processes.

3.3.1.2 Interdependence

Interdependence with vision of the hand and object

During object manipulation, the target distance was assumed to be the input for the transportation process with the output of TT, while the target angle was the input for the orientation process with the output of OT. As shown in Figure 3.4, it was not surprising that TT increased significantly with the target distance (F(2, 14) = 65.25, p < .001), and OT increased significantly with the target angle (F(1, 7) = 76.26, p < .001). However, it was found that the input for each process affected the output of the other process. Figure 3.4 shows that changes in the target distance had a significant effect on the OT (F(2, 14) = 15.56, p < .001), while changes in the target angle affected the TT significantly (F(1, 7) = 7.51, p < .05). As a general trend, both TT and OT increased as the requirement of either object transportation distance or object orientation angle increased. Accordingly, the total task completion time (CT) increased significantly with both target distance (F(2, 14) = 66.46, p < .001) and target angle (F(1, 7) = 10.89, p < .05). It seemed that the effects of target distance were more pervasive on the OT than vice versa. Object transportation and orientation processes were thus interdependent.


Figure 3.4. Task completion time (CT), transportation time (TT) and orientation time (OT) with visual feedback conditions.


Interdependence with no vision of the hand and object

Similar effects were found in the "visually impoverished" condition (see the right column of Figure 3.4). Object manipulation completion time, CT, increased with the target distance (F(2, 14) = 101.11, p < .001) and the target angle (F(1, 7) = 9.10, p < .05). Both processes contributed to the CT, but the transportation process was the critical path determining the CT. TT increased with the target distance (F(2, 14) = 103.00, p < .001) and the target angle (F(1, 7) = 8.43, p < .05). The object orientation affected the object transportation, showing the dependence of TT on the target angle. Both the target distance (F(2, 14) = 30.10, p < .001) and the target angle (F(1, 7) = 42.29, p < .001) had main effects on OT. OT, as an output of the object orientation process, increased with the target distance. An interaction between the target distance and the target angle was also found (F(2, 14) = 8.36, p < .001). The difference in OT between the two target angles seemed to increase with the target distance. OT thus not only depended on the orientation process, but also depended on the transportation process. The interdependent structure of the two processes was persistent over visual conditions.

3.3.2 Spatial errors

3.3.2.1 Constant errors
Constant errors of distance (CED)
With vision of the hand and wooden cube, the average CED overshot 1.5 mm over the target, F(1, 7) = 5.75, p < .05, as shown in the upper-left graph of Figure 3.5. It appeared that CED was particularly small, 0.4 mm, at the distance 30 mm and 22.5 degree angle of target location. When vision was not available, the average CED was 14.5 mm, but this increase was not statistically significant (the upper-right graph of Figure 3.5). Neither target distance nor target angle had effects on CED with the no vision condition.

Constant errors of angle (CEA)
Subjects under-rotated an average CEA of 2.5 degrees when the hand and wooden cube were in view, F(1, 7) = 7.58, p < .05, shown in the bottom-left graph of Figure 3.5. CEA increased from 1.9 degrees at the 22.5 degree angle to 3.1 degrees at 45 degrees, F(1, 7) = 6.06, p < .05. With no vision, the average CEA was 6.6 degrees, F(1, 7) = 27.31, p < .001, as shown in the bottom-right graph of Figure 3.5. There was an interaction between the target distance and angle, F(2, 14) = 5.75, p < .05. The largest CEA of 14.1 degrees occurred at the target distance 30 mm and angle 45 degrees, while CEA showed the smallest value of 1.1 degrees at the target distance 200 mm and angle 22.5 degrees.

Effects of vision conditions

There was a three-way interaction on CED among vision condition, target distance and target angle, F(2, 14) = 3.78, p < .05. CED increased with no vision of the hand and the physical object, and the largest value of 21.6 mm took place at the target distance 200 mm and angle 45 degrees (the upper-right graph of Figure 3.5). A three-way interaction was found for CEA, F(2, 14) = 3.95, p < .05. No vision resulted in an increase in CEA, particularly at the target distance 30 mm and angle 45 degrees, as shown in the bottom-right graph of Figure 3.5.

Figure 3.5. Constant errors of distance (CED) and constant errors of angle (CEA) with visual feedback conditions.

3.3.2.2 Variable errors
Variable errors of distance (VED)
With vision, the average VED was 1.5 mm, but neither target distance nor target angle had effects on VED, as shown in the upper-left graph of Figure 3.6. Under the no vision condition, VED had an average value of 11.7 mm (the upper-right graph of Figure 3.6). With no vision, VED increased with the target distance (F(2, 14) = 5.72, p < .05): 8.2 mm at distance 30 mm, 12.7 mm at distance 100 mm and 14.1 mm at distance 200 mm.

Variable errors of angle (VEA)
With vision of the hand and object, VEA was 2.4 degrees on average (the bottom-left graph of Figure 3.6). VEA had a value of 2.7 degrees at distance 30 mm, slightly but consistently larger than 2.1 degrees at 100 mm and 1.2 degrees at 200 mm, F(2, 14) = 6.55, p < .001. VEA increased from 2.0 degrees at the 22.5 degree target angle to 2.7 degrees at the 45 degree target angle (F(1, 7) = 18.46, p < .01). VEA was 3.3 degrees on average without vision (the bottom-right graph of Figure 3.6). Without vision, the target angle had main effects on VEA (F(1, 7) = 37.29, p < .001): 2.8 degrees at the target angle of 22.5 degrees and 3.9 degrees at the target angle of 45 degrees.

Effects of vision conditions
Vision condition had main effects on VED, F(1, 7) = 63.38, p < .001. VED increased dramatically when vision of the hand and object was removed, from 1.5 mm to 11.7 mm, as shown in the upper graphs of Figure 3.6. Vision also interacted with the target distance to affect VED (F(2, 14) = 5.30, p < .05). VED was especially large, 14.1 mm, at the target distance 200 mm with no vision.

Figure 3.6. Variable errors of distance (VED) and variable errors of angle (VEA) with visual feedback conditions.

The vision condition showed main effects on VEA (the bottom graphs of Figure 3.6). VEA increased from 2.4 degrees with vision to 3.3 degrees without vision (F(1, 7) = 12.65, p < .01), even though the amount of increase (less than one degree) was not as dramatic as for VED. An interaction was also found between the vision condition and the target distance, F(2, 14) = 5.46, p < .05. VEA increased most at the target distance 100 mm, from 2.1 degrees with vision to 3.7 degrees without vision.

3.4 Discussion

3.4.1 The structure of object transportation and orientation

Concurrence
The results demonstrated a parallel structure of object transportation and orientation, supporting our research hypothesis. The total task completion time was less than the sum of object transportation time and orientation time. There was a large portion of overlap between the object transportation and orientation processes, where object manipulation cut across the object transportation dimension and orientation dimension simultaneously, showing a Euclidean distance in the space. In this sense, object transportation and orientation seemed to have characteristics of an integral structure, according to the notion of Jacob et al. (1994). However, our results also indicated that even though the object transportation and orientation processes were in parallel, they did not completely overlap from the beginning to the end. Usually object orientation started a little later and completed earlier than object transportation; the final phase of movement consisted of only object transportation. On average, the time course of object transportation contained that of object orientation, that is, object transportation dominated object orientation. This evidence made object transportation and orientation distinct and identifiable, and therefore suggested a separable structure based on the definition of a perceptual structure (Garner, 1974; Jacob et al., 1994). As recognized by Jacob et al. (1994), "Integral and separable define two classes of perceptual structure that mark the endpoints of a continuum rather than forming a sharp dichotomy". The interpretation of the mechanism underlying the parallel structure of object transportation and orientation has to be extended beyond the notion of integral and separable. We attribute our results to the structure of visuomotor control rather than only the perceptual structure of visual information. Object manipulation does not involve visual information alone; therefore, the structure of tasks cannot be dictated by only visual information. Indeed, our results show that the vision condition of the hand and the physical object interacts with the target attributes jointly to affect object manipulation performance. Haptic and kinesthetic information have a strong role to play in object manipulation tasks as well. Humans' separable visual systems for perception and action imply that the structure of an object in a perceptual space may not be the same one in an interactive space (Goodale et al., 1996). Object manipulation as a goal-directed movement should also take into account the attributes of the target. All information, including the visual display of the task environment and the manipulator, may be relevant for determining the structure.

The notion of concurrence addresses not only whether object transportation and orientation occur simultaneously, but also identifies where and when each process starts and ends. This allows us to explore subtle but important differences in the structure of object transportation and orientation. Obviously, a parallel structure is more efficient than a serial one in terms of the task completion time. To achieve a parallel structure of object manipulation, subjects have to coordinate the two processes in the temporal domain. It was interesting to note that the difference between the starts of transportation and orientation was very short, 30 ms in the vision condition, but consistently increased with the target distance. This observation was unlikely to be a result of an on-line adjustment after object manipulation started because the time was too short for a feedback adjustment (Fitts and Posner, 1967; MacKenzie and Marteniuk, 1985). A possible interpretation is that subjects formed a plan to start the orientation process earlier if the transportation process would be shorter, so as to achieve an efficiently parallel structure. This interpretation is consistent with the fact that the orientation process started earlier when subjects anticipated a longer object orientation. It seemed that there was a need to allocate enough time for on-line correction of object transportation in the last phase of the movement. Evidence showed that the time course of one process of object manipulation was planned in coordination with that of the other process. In conclusion, object manipulation is a unitary visuomotor output with a coordinated control for object transportation and orientation.

Evidence from this study does not support the extension of Jeannerod's (1984) "two independent visuomotor channels" hypothesis to the structure of object transportation and orientation for an object held by the human hand. In contrast, our results showed a strong interdependence between the object transportation and orientation processes. The object transportation time depended not only on target distance, but also on target orientation angle, and vice versa. These results are consistent with recent research on hand transportation and orientation (Soechting and Flanders, 1993; Desmurget et al., 1996). This indicates that even though the spatial states of object translation and rotation can be described separately within a coordinate system, the two processes of object transportation and orientation executed by the human hand are interdependent. Note that Jeannerod's (1984) empirical data for grasping was based on the grasp aperture, not the orientation of the grasp. Object size (an intrinsic property) affects the grasp aperture, but object orientation (an extrinsic property) affects both transportation and orientation of the hand. This is an important distinction, both for motor control and HCI researchers.

It was evident that the increase in object transportation requirements extended the object orientation time, while a larger object orientation resulted in a longer object transportation. However, the two processes did not affect each other evenly. The transportation process appeared to have more significant effects on object manipulation than did the orientation process. Evidence showed that the transportation time course contained the orientation time course, so that TT was a determinant of CT. Quite a long time was allocated for only transportation during the last phase of object manipulation. TT was the critical path for object manipulation with the two processes.

It was interesting to note that the input for each process generally affected the variable errors of only that process. As shown in Figure 3.6, VED was affected by target distances but not target angles, and VEA was affected by the target angle but not the target distance. It appeared that, in terms of the consistency of human performance, the accuracy control of object transportation was independent from that of object orientation. This confirmed the results reported by Soechting and Flanders (1993).

3.4.2 Effects of visual feedback information

In general, the structure was similar under the two vision conditions in terms of the concurrence and interdependence between transportation and orientation processes. Whether visual feedback for the hand and the wooden cube was present or absent, the transportation time course always contained the orientation time course, and the two processes were interdependent. In other words, the parallel and interdependent structure of object transportation and orientation was persistent across the visual feedback conditions. One possible explanation is that, given the target location and orientation, the structure is already programmed before the start of the movement. This finding deserves further investigation.

Deprivation of visual feedback of the object and the hand increased spatial errors of object transportation and orientation. The magnitude of the human bias indicated by constant errors in object manipulation was larger in the no vision condition. Human performance consistency, shown in variable errors, decreased with reduced visual feedback in the virtual environment. The effects were more dramatic on the transportation errors than the orientation errors in terms of variable errors. This suggests that humans may rely more on visual information for accuracy control, especially for the object transportation process. Combined with the above findings regarding temporal outputs, it is reasonable to postulate that the speed of object manipulation may depend more on haptic information. No haptic condition was changed in this experiment; the following experiments will further explore this topic.

3.4.3 Implications for HCI design

Human-computer interfaces should be designed to accommodate the parallel, interdependent structure of object manipulation. Constraints on or interruption of the integration of object manipulation may result in structural inefficiency. For example, if a parallel structure of transportation and orientation processes is transformed into a serial structure, the total task completion time may increase significantly even though the completion time for each process remains the same. At the same time, if the main goal of interface design is to achieve "naturalness" or realism, as in some virtual reality applications, retaining the natural structure of human object manipulation will be particularly important.

Understanding the hand can be beneficial for evaluating and designing input devices, especially multiple dimensional pointing devices. This study shows that the orientation control can be totally integrated with the transportation control, and that the transportation control is the critical path for task completion. These features of hand prehension should be carefully considered for input device design.

3.5 Conclusions
We conclude from Experiment 1:
1. Object transportation and orientation have a parallel, interdependent structure.
2. The object transportation process temporally contains the object orientation process.
3. The structure of object transportation and orientation is generally independent of visual feedback conditions.
4. Lack of visual feedback information increases spatial errors of object manipulation, especially in the transportation process.

Chapter 4 Experiment 2: The Effect of Controller, Cursor and Target Sizes

4.1 Introduction
Object manipulation tasks in human-computer interaction (HCI) generally involve three objects: a controller, a cursor and a target. A controller is an input device such as a mouse manipulated by the human hand. A cursor is a graphic object on a display driven by and spatially mapped to the controller's movement. A target is a graphic object such as an icon on the display that defines some aspects of the goals of an object manipulation task. In a typical object manipulation task, a user controls an input device to move a cursor to a target. One common spatial property of the controller, cursor and target is their size, which can have significant effects on human object manipulation performance. The objectives of this study are to investigate how the sizes of controllers, cursors and targets affect human performance in object manipulation and to provide further understanding for human-computer interface design.

Effects of target size in HCI have been extensively studied in light of Fitts' law, and findings have been successfully implemented in human-computer interface design (Fitts, 1954; Graham and MacKenzie, 1996; MacKenzie, 1992). It is generally concluded that movement time increases with decreases in target size during a pointing task. Most previous HCI studies on target size used the same input device and a cursor of constant size, and were limited to two-dimensional pointing tasks (Fitts' tasks). Kabbash and Buxton (1995) conducted a study to compare the use of an area cursor with a typical "point" cursor for a two-dimensional selection task. In their experiment, the area cursor was a large rectangular area and the point cursor was a small circular dot. Their results showed that the area cursor had effects similar to those of the target size on task performance. Since the size and shape of the cursor and target changed together across experimental conditions, it was not clear whether their results were due to the compound effect of cursor size and shape or the effect of cursor size alone.

Modern computer systems such as virtual reality systems usually require multiple dimensional object manipulation, e.g., graphic object docking and tracking. Relatively few studies on human performance have been conducted in multiple dimensional

environments. Zhai, Milgram, and Buxton (1996) examined human performance on multiple dimensional object manipulation by comparing two six degrees of freedom (DOF) input devices, one attached to the palm, the other manipulated by the fingers. They suggested that the size and shape of input devices should be designed to allow better performance through finger manipulation. Some studies found that it was rather difficult to control all dimensions simultaneously, depending on the specific task and interface systems (Ware, 1990; Zhai and Milgram, 1997). However, Experiment 1 showed that users performed simultaneous control of object transportation and orientation. It was found that object transportation and orientation had a parallel and interdependent structure that was persistent over visual conditions. We are unaware of any study that has examined the size effects of controllers, cursors and targets altogether on object manipulation in virtual environments.

Experiment 2 was conducted to investigate the effects of controller, cursor, and target sizes on object transportation and orientation in a virtual environment. This experiment was designed to test three research hypotheses. The first hypothesis is called the "relative size hypothesis": it is predicted that there are strong interactions among controller size, cursor size and target size on object transportation and orientation. The second hypothesis is the "same size hypothesis". Specifically, when the sizes of a controller and a cursor are the same, the haptic information of the controller size is consistent with its graphic representation, the cursor. Consistency between haptic and visual information facilitates human object manipulation. The same size of cursor and target provides strong visual feedback for a matching task. It is predicted that human performance will be better, in terms of a faster task completion time and fewer spatial errors, when the sizes of the controller, cursor and target are all the same. Thirdly, we predict that object transportation and orientation have a parallel, interdependent structure. We expect that the same structure seen in Experiment 1 is retained, regardless of the controller, cursor and target size.

4.2 Method

4.2.1 Subjects
Eight university student volunteers were paid $20 for participating in a two-hour experimental session. All subjects were right-handed, had normal or corrected-to-normal vision, and had experience using a computer. Informed consent was provided before the experimental session.

4.2.2 Experimental setup
The same Virtual Hand Laboratory (VHL) setup used for Experiment 1 (Figure 3.1) was modified for this experiment (Figure 4.1). The VHL provided a high fidelity system where display space was superimposed on the hand's workspace. As in Experiment 1, the subject had a real time, stereoscopic, head-coupled view, provided by the OPTOTRAK system, SGI computer system and CrystalEYES goggles.

Unlike in Experiment 1, a graphic cursor was presented in Experiment 2. The cursor was a six DOF wireframe graphic cube driven by the three IREDs on the top of the controller. The controller was a hollow plastic cube with two sizes. Only in this experiment, plastic cubes were used to minimize weight differences. The graphic cursor cube was calibrated to be superimposed on the bottom center of the controller cube. The target was a wireframe graphic cube that appeared on the table surface to the subject. The target cube was located 100 or 200 mm away from the starting position in the midline of the subject's body. The target cube was presented to the subject rotated either 0 or 30 degrees clockwise about a vertical axis. Two sizes of controller, cursor and target cubes were used, 10 mm and 50 mm, termed small and large respectively.

In Experiment 2, the light under the mirror was always off. The subject saw the target cube and the cursor cube presented on the mirror, but was unable to see the plastic controller cube and the hand under the mirror.

Figure 4.1. The Virtual Hand Laboratory setup for Experiment 2. Shown in schematic are large controller (solid line), small cursor and large target (dashed line).

4.2.3 Procedure
Prior to each experimental session, the workspace on the table surface and the cursor cube position relative to the controller cube were calibrated, as described in Chapter 2 (see Summers et al., 1999). Individual subject eye positions were also calibrated to provide a customized, stereoscopic, head-coupled view.

The task was to match the location and orientation of the graphic cursor cube to those of the target cube as fast and accurately as possible. When the cursor size was different from the target size, the subject was asked to align the cursor cube and target cube at the bottom center, so that the plastic controller cube would finish on the table surface for all experimental conditions. To start a trial, a target cube appeared at one of two distances with one of two angles. Then, the subject moved the controller so that the cursor matched the target's location and angle as quickly and accurately as possible. When the subject was satisfied with the match, he/she held the controller still and said "OK". The experimenter controlled the timing of the start and end of trials by pressing a mouse button. Trials were blocked on the controller size and the cursor size. In each experimental condition, 10 trials were repeated. At the beginning of each block of trials, subjects were given 20 trials for practice. Target size, distance and angle were randomly ordered over trials. Trials with a zero target angle enabled randomization of the target angle, thus minimizing the subject's anticipation of the target angle during the experiment.

Only data from trials using the 30 degree target angle were analyzed, so that a complete set of object orientation measures, in correspondence with transportation measures, could be presented.

4.2.4 Data analysis

Object transportation and orientation data were derived from OPTOTRAK 3D position data collected from two IREDs on the top of the controller cube. Independent variables for this experiment were controller size, cursor size, target size, and target distance. Dependent temporal measures were: total task completion time (CT), object transportation time (TT), object orientation time (OT), and relative time courses of the transportation and orientation processes. Spatial error measures were: constant errors of distance (CED), constant errors of angle (CEA), variable errors of distance (VED), and variable errors of angle (VEA). ANOVAs were performed on the balanced design of 2 controller sizes x 2 cursor sizes x 2 target sizes x 2 target distances with repeated measures on all four factors.
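The four spatial error measures follow the standard motor-control definitions: a constant error is the mean signed deviation of the trial endpoints from the target, and a variable error is the standard deviation of those deviations. A minimal sketch of the computation (the function name and array layout are illustrative, not taken from the thesis software):

```python
import numpy as np

def spatial_errors(final_dist, final_angle, target_dist, target_angle):
    """Constant error = mean signed deviation from the target;
    variable error = sample standard deviation of the deviations
    (standard definitions; the thesis does not spell out the formulas here)."""
    d_err = np.asarray(final_dist, dtype=float) - target_dist
    a_err = np.asarray(final_angle, dtype=float) - target_angle
    return {
        "CED": d_err.mean(),        # + = overshoot, - = undershoot
        "CEA": a_err.mean(),        # + = over-rotation, - = under-rotation
        "VED": d_err.std(ddof=1),   # trial-to-trial distance consistency
        "VEA": a_err.std(ddof=1),   # trial-to-trial angle consistency
    }
```

With this convention, the overshoots and under-rotations reported below correspond to positive CED and negative CEA, respectively.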

4.3 Results

4.3.1 Temporal measures

4.3.1.1 Relative time courses

In all experimental conditions, object manipulation first started with the transportation process alone (Figure 4.2 and Figure 4.3). After an average of 69 ms, the orientation process joined the transportation process. Both object transportation and orientation processes proceeded simultaneously until the orientation process finished. In the last phase of object manipulation, the transportation process continued alone again for an average of 188 ms. The dark bars in Figure 4.2 and Figure 4.3 indicate the simultaneous control time of the transportation and orientation processes as well as the orientation time (OT). In other words, the object transportation process temporally contained the orientation process. The structure of transportation and orientation processes shown here was consistent with previous findings in Experiment 1.
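This decomposition can be written out directly: given onset and offset times for each process, TT and OT are the process durations, CT spans both, and the dark-bar overlap is the window where both are active. A schematic sketch (the event timestamps are illustrative, built from the average values just reported; the thesis derives these events from OPTOTRAK marker data):

```python
def time_course(t_on, t_off, o_on, o_off):
    """Decompose one trial into transportation time (TT), orientation time
    (OT), total completion time (CT) and the simultaneous-control window."""
    TT = t_off - t_on                                      # transportation duration
    OT = o_off - o_on                                      # orientation duration
    CT = max(t_off, o_off) - min(t_on, o_on)               # whole-task duration
    overlap = max(0, min(t_off, o_off) - max(t_on, o_on))  # both processes active
    return TT, OT, CT, overlap

# Average pattern: transportation 0-886 ms, orientation joining 69 ms in
# and ending before transportation does (187 ms here, after rounding).
TT, OT, CT, overlap = time_course(0, 886, 69, 699)
```

Because the transportation interval contains the orientation interval, CT equals TT, and the concurrence means CT is well below the serial sum TT + OT.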

4.3.1.2 Completion time (CT) and transportation time (TT)

Average task completion time (CT) over all conditions was 909 ms. CT was dominantly determined by the transportation time (TT); TT took up 97.5% of CT. Results of the CT analysis were similar to those of the TT data. For brevity, detailed results for TT only are presented here. It took 886 ms on average for a subject to complete the object transportation. There was a significant interaction between the controller size and cursor size (F(1, 7) = 5.75, p < .05), shown in Figure 4.4. The average TT was 862 ms when both controller and cursor were small, similar to the average value of 866 ms when both controller and cursor were large. When the controller was large and the cursor was small, TT increased to 896 ms. A small controller and a large cursor resulted in the greatest average TT of 921 ms. The controller size and cursor size also significantly interacted with the target distance (F(1, 7) = 19.28, p < .01). It appeared that TT was much slower at the target distance of 100 mm with a small controller and a large cursor. These results demonstrated that it was the relative size between the controller and cursor that significantly affected TT, as predicted by our relative size hypothesis. The same size hypothesis was also supported by the data in that when the controller size and cursor size were the same, the transportation time (TT) was significantly faster.

Figure 4.2. Time courses of transportation and orientation processes at target distances 100 mm and 200 mm.

Figure 4.3. Time courses of transportation and orientation processes with small and large controller sizes. S. Con = small controller; L. Con = large controller.

Figure 4.4. Interaction between controller size and cursor size on transportation time.

Figure 4.5. Interaction between cursor size and target size on transportation time.

A significant interaction for TT was also found between the cursor size and the target size (F(1, 7) = 61.85, p < .001), as shown in Figure 4.5. However, the nature of the cursor and target size interaction was different from that of the controller and cursor size interaction mentioned above. It took a longer transportation time when the cursor and target had the same size than when they were different. When both cursor and target were small, TT was 946 ms, and when both were large, TT was 961 ms. TT was much faster when the cursor and target had different sizes: 825 ms with a large cursor and a small target, and 813 ms with a small cursor and a large target. These results seem to be counterintuitive. It appeared that the subjects took advantage of the strong visual feedback presented with the same sizes of cursor and target to achieve higher accuracy than when cursor and target sizes were different. We will return to this point when we examine the spatial errors later.

Replicating Experiment 1, TT significantly increased with the target distance, F(1, 7) = 131.62, p < .001. The average TT was 786 ms at 100 mm, and 987 ms at 200 mm. No other main effects were found. Neither controller size, cursor size, nor target size had main effects on the transportation time. This clearly demonstrated that the relative sizes among the controller, cursor and target, rather than their absolute sizes alone, influenced human performance in object transportation. Note that no significant interaction was found between the controller size and the target size.

4.3.1.3 Orientation time (OT)

The average orientation time (OT) was 630 ms, 71% of the task completion time (CT), much shorter than the 97.5% for transportation time (TT). Overall statistics on the OT data were similar to those on the TT data, but there were some differences in detail. As shown in Figure 4.6, there was a significant interaction between the controller size and cursor size (F(1, 7) = 20.69, p < .01). With both a large controller and a large cursor, the object orientation was fastest, with a time of 564 ms. However, when both controller and cursor were small, the average OT was 656 ms, greater than the average value of 592 ms where the controller was larger than the cursor. The slowest OT occurred when a large cursor was driven by a small controller (706 ms).

There was a three-way interaction among the cursor size, target size and target distance, F(1, 7) = 6.20, p < .05, shown in Figures 4.7A and 4.7B. At the target distance of 100 mm, OT showed a similar cursor by target pattern as TT (Figure 4.5). At 100 mm, with the same sized cursor and target, it took longer to complete the object orientation (634 ms for both small and 621 ms for both large) than when the cursor size and the target size were different (Figure 4.7A). These results may be due to subjects' efforts to obtain a more accurate match by using the strong visual feedback when both cursor and target sizes were the same. In contrast, at the target distance of 200 mm, when both cursor and target were large, OT (689 ms) was longer than in the other three cursor by target conditions (647-650 ms), as shown in Figure 4.7B.

OT increased with the target distance, from 601 ms at LOO rnm to 658 ms at 200 mm. Target distance can be considered as an input to the object transportation process, and therefore should have an effect on 'IT. OT, on the other hand, can be considered as on output of the object orientation process. The main effect of target distance on OT indicated that this input for the transportation process significantly affected the output of the orientation process. This result confirmed previous findings in Experiment 1 that the transportation and orientation processes were interdependent. There were no other main

effects on OT. It was the relative size that affected the object orientation process. Similar to TT.again, there was no interaction between controller size and target size on

OT,

Figure 4.6. Interaction between controller size and cursor size on orientation time.

Figure 4.7A. Interaction between cursor size and target size on orientation time at target distance 100 mm.

Figure 4.7B. Interaction between cursor size and target size on orientation time at target distance 200 mm.

4.3.2 Spatial errors

4.3.2.1 Constant errors of distance (CED) and constant errors of angle (CEA)
The average value of constant errors of distance (CED) was 1.4 mm from the target distance, significantly different from zero, F(1, 7) = 13.86, p < .01. CED increased significantly with cursor size, from 0.6 mm with a small cursor to 3 mm with a large cursor. The effect of target size was also significant, with F(1, 7) = 14.92, p < .01. CED was 2.1 mm with a small target, reduced to 0.6 mm with a large target. No other main effect or interaction was found. On average, the constant error of angle (CEA) was 1.1 degrees under-rotated, but this was not significant.

4.3.2.2 Variable errors of distance (VED)

The overall average VED was 2.1 mm. Both controller size (F(1,7) = 6.30, p < .05) and cursor size (F(1,7) = 15.11, p < .01) had significant main effects. VED increased from 1.9 mm to 2.2 mm with increases in controller size, and from 1.8 mm to 2.3 mm with increases in cursor size.

An interaction among controller size, cursor size and target distance was found, F(1,7) = 7.06, p < .05. As shown in Figure 4.8A, when the controller and cursor were both small, VED was smallest at the 100 mm target distance. In contrast, Figure 4.8B showed that at the 200 mm target distance, VED was largest when both controller and cursor were large. Combined with the transportation time results, it appeared that object transportation was fastest and most accurate when both controller and cursor were the same small size.

There was a significant interaction between the cursor size and the target size, F(1,7) = 55.76, p < .001. VED was smaller when the cursor and target had the same size than when they were different. This showed that subjects took advantage of the strong visual feedback available when the cursor and target sizes were the same to achieve high accuracy. However, there was also a three-way interaction among controller size, cursor size and target size, F(1,7) = 7.34, p < .05. As shown in Figures 4.9A and 4.9B, VED was particularly large for a large controller, a large cursor and a small target, with a value of 3.5 mm compared to the average VED of 2.1 mm. This was the only time we found a three-way interaction among the controller, cursor and target sizes. No other interactions were found between the controller size and the target size in this experiment.

4.3.2.3 Variable errors of angle (VEA)

The average VEA was 1.1 degrees across all conditions. There was a significant interaction between cursor size and target size, F(1,7) = 8.99, p < .05. As shown in Figure 4.10, VEA was less with the same sized cursor and target than with the different sized cursor and target. VEA was the smallest when both the cursor and the target were large, 1.9 degrees, compared to 3.2 degrees when both of them were small. There was no difference in VEA between target sizes with the small cursor, but a distinct advantage for the large target was seen when using the large cursor.

Figure 4.8A. Interaction between cursor size and controller size on variable errors of distance at 100 mm.

Figure 4.8B. Interaction between cursor size and controller size on variable errors of distance at 200 mm.

Figure 4.9A. Interaction between cursor size and target size with small controller on variable errors of distance.

Figure 4.9B. Interaction between cursor size and target size with large controller on variable errors of distance.

Figure 4.10. Interaction between cursor size and target size on variable errors of angle.

4.4 Discussion

We first summarize and discuss the results in light of our research hypotheses. We then relate our findings to implications for HCI design.

4.4.1 Relative size hypothesis

Results from this experiment supported the relative size hypothesis. As predicted, interactions among the controller size, cursor size and target size were found on all dependent measures.

In the temporal domain, there were significant interactions between controller size and cursor size as well as between cursor size and target size for total task completion time (CT), transportation time (TT) and orientation time (OT). However, there was no interaction for the temporal measures between controller size and target size. At the same time, neither controller size, cursor size, nor target size alone had significant effects on CT, TT or OT. The results demonstrated that it was the relative size that mattered, rather than the absolute size of controller, cursor or target for the temporal measures presented here.

In the spatial domain, the relative size of controller and cursor as well as cursor and target significantly affected variable errors of distance (VED). A three-way interaction was found among controller size, cursor size and target size. It appeared that, with the small target, VED increased more with cursor size when using the large controller than when using the small controller. This was the only instance in Experiment 2 where controller size interacted with target size. For variable errors of angle (VEA), the relative size between the cursor and the target showed significant effects. In conclusion, the relative size of controller, cursor, and target was important for spatial errors of object manipulation.

4.4.2 Same size hypothesis

We expected human performance to be better when the controller size, cursor size and target size were the same. We found that transportation times (TT) were faster when the controller and cursor were both either small or large. However, TT was slower when the cursor and target sizes were the same, either both small or both large. In the case of orientation time (OT), it was fastest when both controller and cursor were large. However, OT with the small controller and the small cursor was not as fast as that with the large controller and the small cursor. For the interaction between cursor size and target size, the same size resulted in a slower OT.

For spatial errors, VED was smaller when the controller and cursor were both small. VED was also smaller when the cursor and target were the same size, small or large. VEA had the smallest value when the controller size and cursor size, or the cursor size and target size, were both large.

In general, the above results indicated that the same size of controller and cursor facilitated the object transportation and orientation processes in terms of faster TT and OT. On the other hand, the same size of cursor and target helped accuracy in terms of less VED and VEA. However, it took extra time (TT and OT) to reduce VED and VEA by taking advantage of the strong visual feedback presented by the same sized cursor and target, indicating a speed/accuracy tradeoff.

4.4.3 The structure of object transportation and orientation

Results from this experiment supported our hypothesis, extending the findings of Experiment 1 that object transportation and orientation have a parallel, interdependent structure. The relative sizes of controller, cursor and target had significant influence on individual temporal and spatial measures of the object transportation and orientation processes, but did not change the overall structure of the two processes. As shown in Experiment 1, this structure was very persistent over all haptic and visual information conditions. This hypothesis will be tested further in the last two experiments.

4.4.4 Interplay of haptic and visual information

The interplay of controller, cursor and target sizes affects object manipulation, as illustrated in Figure 4.11. There is a strong interaction between the controller and cursor, and between the cursor and target, but not between the controller and target. The matched sizes of controller and cursor facilitate object manipulation speed, while the same sizes of cursor and target improve accuracy, but take more time. The relative size between the controller and target generally has no significant effect on object manipulation performance.

These findings provide insight into the underlying mechanism of human performance in HCI. The cursor and the target are objects in the display domain, while the controller is in the control or hand domain. The cursor is the link which interacts with both controller and target. The intrinsic properties of a cursor such as size and shape are presented in the display domain. At the same time, a cursor can be considered as a graphic representation of the controller. The extrinsic properties of a cursor such as location and orientation are determined by the controller in the control domain. In contrast, both intrinsic and extrinsic properties of a target are in the display domain, while both intrinsic and extrinsic properties of a controller are in the control domain. We suggest that it is the nature of domain separation between the intrinsic and extrinsic properties of a cursor that makes it unique: bridging between the controller and the target. Neither controller nor target has properties across another domain besides its own.

There may be different reasons for human performance improvement in speed and accuracy in the same size conditions. The fast object transportation and orientation processes with the same sized controller and cursor may be due to the consistency between haptic information of the controller and visual information of the cursor, that is, what subjects feel is consistent with what they see. As discussed previously, the performance improvement in accuracy may be due to processing of visual feedback information when the cursor and target are exactly the same size.

Figure 4.11. Interplay of controller, cursor and target size.

4.4.5 Implications for HCI design

HCI design should consider the sizes of controller, cursor and target together, rather than isolating each element. Particular attention should be paid to cursor properties in relation to the controller and the target. Any moving graphic object driven by an input device can be considered as a cursor. Therefore, the interaction of a controller with a cursor or other graphics is expected to occur in general graphic interaction applications such as animation and gaming. The size effect of a cursor has conventionally been ignored in both input device design and graphic design. As shown in this study, an appropriately sized cursor may significantly improve human performance in HCI.

The relative size of objects should be determined in the context of task requirements. If speed is the main concern, attention should be paid to the controller and cursor size; if accuracy is the main goal, emphasis should be directed to the cursor and target size. Small controller and cursor sizes may benefit object transportation tasks, while larger ones may facilitate object orientation tasks. A tradeoff may be achieved by closely examining the size effect to meet the specific task requirements.

The size effect of controller, cursor and target should be taken into account in the experimental design of HCI research. For example, in previous input device comparison studies, the size of different input devices usually was not controlled during experiments or not reported in publications (Zhai, 1995; Hinckley et al., 1997). The size of input devices may actually have a compound effect with other factors such as cursor sizes, and even target sizes. Thus, caution is needed when interpreting results of studies on multiple dimensional object manipulation in virtual environments.

It is interesting to note that results from this experiment did not conform to Fitts' law (Fitts, 1954). In general, the task completion time did not increase as the target size decreased; this depended on cursor size. The target size alone showed no significant effect on the task completion time, transportation time or orientation time. This demonstrates that multiple dimensional matching or docking tasks are not Fitts' tasks per se. This further suggests that human information processing for multidimensional object transportation and orientation may be different from that for pointing.
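For reference, Fitts' law predicts that movement time grows linearly with the index of difficulty, ID = log2(D/W + 1) in MacKenzie's Shannon formulation. The quick sketch below (distances and widths are illustrative values, not necessarily the actual experimental sizes) shows how ID would rise as target width shrinks, the pattern that the completion times here did not follow:

```python
import math

def fitts_id(distance_mm: float, width_mm: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance_mm / width_mm + 1)

# Hypothetical condition values for illustration only.
for d in (100, 200):
    for w, label in ((30, "small"), (60, "large")):
        print(f"D={d} mm, {label} target (W={w} mm): ID = {fitts_id(d, w):.2f} bits")
```

Under Fitts' law, smaller widths at a fixed distance yield higher ID and thus longer predicted movement times; the absence of a target-size effect here is what argues against treating the docking task as a Fitts task.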

4.5 Conclusions

It is concluded from Experiment 2:

1. The relative size of controller, cursor, and target matters; human performance is a result of the interplay between haptic and visual information during object transportation and orientation.

2. The same sizes of controller and cursor facilitate human performance in object manipulation speed.

3. The same sizes of cursor and target improve human performance in object manipulation accuracy.

4. Object transportation and orientation have a parallel, interdependent structure, regardless of the sizes of controller, cursor and target.

Chapter 5

Experiment 3: The Effect of Spatial Orientation Disparity between Haptic and Visual Displays

5.1 Introduction

Objects in the real world are a unitary whole, that is, haptic and visual displays are spatially consistent. Haptic displays are perceived by the hand in the control space, and include the physical characteristics (e.g., shape and size) of an object. Visual displays of an object are perceived by the eyes. For object manipulation in the real world, what we see is generally consistent with what we feel. Humans perform daily activities in an environment where haptic and visual displays of objects are completely superimposed.

However, this is hardly the case in modern human-computer interaction (HCI). In a typical human-computer interaction setup, the control space of the hand is separate from the display space of the objects, where what a user feels with her hand is not the same as what she sees with her eyes. For example, in a desktop HCI situation, the movement of a mouse on a horizontal plane is transformed to the movement of a cursor on a vertical screen. The cursor is a visual or graphic representation of the mouse, but with a different shape, size, location and orientation from the mouse. The mouse and the cursor are not spatially superimposed. In other words, the haptic display of a mouse is different from its graphic display, a graphic cursor (Graham and MacKenzie, 1996).
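The desk-to-screen transform in the mouse example can be sketched as a scaled displacement mapping. This is a sketch only; the function name and gain value are assumptions for illustration, not values from the thesis. The point is that the two spaces differ in location and orientation, yet relative movements map consistently between them:

```python
# Control-display mapping sketch: a mouse displacement on the horizontal desk
# plane (x: rightward, y: away from the user) becomes a cursor displacement on
# the vertical screen (x: rightward, y: upward), scaled by a gain factor.
GAIN = 2.0  # display units per control unit (assumed value)

def desk_to_screen(dx: float, dy: float) -> tuple[float, float]:
    # "Away from the user" on the desk maps to "up" on the screen; only the
    # magnitude is rescaled, so movement direction feels natural despite the
    # spatial separation of control and display spaces.
    return (GAIN * dx, GAIN * dy)

print(desk_to_screen(10.0, -5.0))  # rightward-and-toward-user motion
```

Orientation disparity, the subject of this experiment, corresponds to inserting a rotation into such a mapping so that what the hand feels no longer aligns with what the eye sees.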

We use the term disparity in this experiment to refer to the spatial difference between haptic and graphic displays of objects. Disparity between haptic and graphic displays is an important feature that distinguishes most current virtual environments from real world environments. In human-computer interaction applications, the graphic object being manipulated by a physical controller rarely has the same characteristics (e.g., shape, size, location and orientation) as the controller (Ware, 1990, 1998; Zhai and Milgram, 1997; Boritz, 1998). Disparity between haptic and graphic displays can have significant effects on human performance in virtual environments. The purpose of this experiment is to investigate how spatial disparity between haptic and graphic displays affects human object manipulation in virtual environments, and to provide further insight into HCI design.

Effects of disparity between haptic and graphic displays have rarely been studied. Experiment 2 showed the effects of relative size among the controller, cursor and target on docking tasks in virtual environments. The difference between the controller and cursor sizes indicated the effects of size disparity between haptic and graphic displays. The result showed that human performance was better when the controller and cursor had the same size, that is, when the haptic and graphic displays were superimposed. Graham and MacKenzie (1996) conducted experiments to examine human pointing performance under different relationships between the display space and control space. Their results are related to the effects of translation disparity between haptic and graphic displays of objects, since the graphic display of objects was translated away from the haptic displays in their experiments. They found that users generally achieved better performance when the display space and control space were superimposed than in other conditions. A study on misalignments between the display and control axes by Ellis, Tyler, Kim and Stark (1992) provided similar evidence.

Ware noticed that object transportation and orientation in virtual environments were much slower than in the real world (Ware, 1990; Ware, 1998). He suggested that it could be the object orientation components that slowed down the object manipulation process in virtual environments. We are unaware of research in the literature that systematically studied the effects of orientation disparity between haptic and graphic displays on object manipulation in virtual environments. This disparity may play an important role in human performance and is the focus of Experiment 3.

Experiment 3 was conducted to investigate the effects of orientation disparity between object haptic and graphic displays on object transportation and orientation in virtual environments. Three research hypotheses were proposed:

1. Human performance is optimum under the no orientation disparity condition, when the haptic and graphic displays are superimposed. When no disparity is present, humans can take advantage of their object manipulation skills, easily transferring from the real world into the virtual world.

2. The orientation disparity between haptic and graphic displays of an object affects not only the orientation process, but also the transportation process. Experiments 1 and 2 showed that object transportation and orientation processes interacted with each other, suggesting an interdependent structure. The orientation disparity can be considered as an input to the object orientation process, but can affect the output of the object transportation process as well.

3. Experiments 1 and 2 demonstrated that object transportation and orientation had a parallel, interdependent structure over various haptic and visual conditions. We hypothesized that this structure persists over orientation disparity conditions.

5.2 Method

5.2.1 Subjects

Eight university student volunteers were each paid $20 for participating in one two-hour experimental session. All subjects were right-handed, and had normal or corrected-to-normal vision. Subjects all had experience using a computer. Informed consent was provided before the experimental session.

5.2.2 Experimental setup

The Virtual Hand Laboratory setup for Experiment 3 was similar to Experiment 2, as shown in Figure 5.1, with a few differences. In Experiment 3, the controller, cursor and target cubes had a constant size of 30 mm. The controller was a wooden cube. The cursor and target were wireframe graphic cubes. In this experiment, the wooden cube was referred to as the physical object (or controller), the cursor cube as the graphic object, and the target cube as the graphic target.

The center of the graphic object (cursor) was always superimposed with the physical object on the tabletop. In one experimental condition, the graphic object was oriented 30 degrees clockwise from the physical object around their common center, therefore generating an orientation disparity between the haptic and graphic displays (see Figure 5.1B). In another condition, the graphic object and physical object were totally aligned, with no orientation disparity.

Figure 5.1. The Virtual Hand Laboratory setup for Experiment 3. Figure A is a side view of the setup, showing a wooden cube (solid line) and a graphic target cube (dashed line). Figure B is a top view of the setup. The graphic cube (dashed line) further away from the subject is the target cube. The graphic cube (dashed line) closer to the subject is the graphic cursor, rotated 30 degrees clockwise from the wooden cube (solid line). The subject could feel the wooden cube in hand, but could not see the wooden cube during the experiment.

The graphic target was located either 100 mm or 200 mm away from the starting position of the physical object. In the no disparity condition, the graphic target was oriented 30 degrees clockwise; in the disparity condition, the target was oriented 60 degrees clockwise. This target angle arrangement guaranteed that the physical object was always required to rotate 30 degrees to match the target orientation, regardless of the disparity or no disparity condition, so that results with different disparity could be compared. There was also a condition requiring no rotation (zero degrees). This condition was used to randomize the target angle and minimize subject anticipation of target angles, similar to Experiment 2. Data in the no rotation condition were not included in the analysis.
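The angle arrangement can be verified with simple arithmetic. In the sketch below (condition names and variable names are mine), the required physical rotation is the target angle minus the fixed cursor offset, which comes out to 30 degrees in both conditions:

```python
# Angles in degrees, clockwise positive, as in the top view of Figure 5.1.
CURSOR_OFFSET = {"no_disparity": 0, "disparity": 30}  # cursor vs. physical object
TARGET_ANGLE = {"no_disparity": 30, "disparity": 60}  # graphic target orientation

def required_physical_rotation(condition: str) -> int:
    # The cursor must end at the target angle; the physical object always
    # trails the cursor by the fixed disparity offset, so the hand rotation
    # is the target angle minus that offset.
    return TARGET_ANGLE[condition] - CURSOR_OFFSET[condition]

for cond in ("no_disparity", "disparity"):
    print(cond, required_physical_rotation(cond))  # 30 in both conditions
```

Equal hand rotations across conditions are what make the disparity and no disparity orientation times directly comparable.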

5.2.3 Procedure

At the beginning of each experimental session, the table surface was calibrated and the relative orientation between the physical object and the graphic object was determined. As in previous experiments, subjects' eye positions were calibrated individually.

The subject held the physical object with the right hand, with the thumb and index finger in pad opposition on the center of opposing cube faces. At the starting position, the two sides of the wooden cube held with the fingers were parallel to the frontal plane of the body. Subjects were asked to perform two kinds of tasks: physical match and graphic match. The physical match was to match the physical object to the location and orientation of the graphic target according to the haptic information felt with the hand. The graphic match was to match the graphic object (cursor) to the graphic target based on what they saw with their eyes. The subject was instructed to match either the physical object or the graphic object to the location and orientation of the graphic target as fast and accurately as possible. Trials were blocked by task and disparity conditions. Fifteen trials were repeated for each experimental condition. Target locations and angles were randomly ordered in each block of trials. Subjects were given 20 trials for practice at the beginning of each block.

5.2.4 Data analysis

Independent variables for this experiment were task conditions, disparity conditions, and target distances. Dependent variables were derived from two IRED markers on the top of the physical object (wooden cube). The temporal measures were: task completion time (CT), object transportation time (TT), object orientation time (OT), and relative time courses of the transportation and orientation processes. The spatial error measures included: constant errors of distance (CED), variable errors of distance (VED), constant errors of angle (CEA), and variable errors of angle (VEA). ANOVAs were performed on the balanced design of 2 task conditions x 2 disparity conditions x 2 target distances with repeated measures on all three factors.
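As a sketch of this kind of analysis, a one-way repeated-measures F test for a single within-subject factor can be computed as below. This is not the thesis code: the CT values are synthetic, with a built-in distance effect, and serve only to show the sums-of-squares decomposition that a repeated-measures ANOVA performs for each factor.

```python
import numpy as np

def rm_anova_1way(data: np.ndarray):
    """One-way repeated-measures ANOVA. data: subjects x conditions matrix."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # condition effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subject effect
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj                    # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_cond / df_cond) / (ss_err / df_err)
    return f_stat, df_cond, df_err

# Synthetic task completion times (ms) for 8 subjects at two target distances,
# with an injected ~170 ms distance effect plus trial noise.
rng = np.random.default_rng(1)
base = rng.normal(800, 40, size=(8, 1))
ct = np.hstack([base, base + 170]) + rng.normal(0, 10, size=(8, 2))
F, df1, df2 = rm_anova_1way(ct)
print(f"F({df1},{df2}) = {F:.1f}")
```

With 8 subjects and a two-level factor, the test has F(1,7) degrees of freedom, matching the F(1,7) statistics reported throughout these results.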

5.3 Results

5.3.1 Temporal measures

5.3.1.1 Relative time courses

The relative time courses between the transportation and orientation processes were similar for all experimental conditions (Figures 5.2A, 5.2B and 5.2C). Subjects first started the transportation process alone, then transported and oriented the object simultaneously, and finally finished with the transportation process alone. According to our definitions, this was a parallel structure where the transportation process contained the orientation process. These results replicated and extended previous findings in Experiments 1 and 2.
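The concurrence definitions can be made concrete with a small sketch (function name and labels are mine, not the thesis code): given the start and end times of each phase, a trial is parallel when the two windows overlap, and the pattern reported here is orientation nested entirely inside transportation.

```python
def classify(t_start: float, t_end: float, o_start: float, o_end: float) -> str:
    """Classify a trial's structure from transportation (t) and orientation (o)
    phase boundaries, in ms from movement onset."""
    if o_start >= t_end or t_start >= o_end:
        return "sequential"                       # no temporal overlap
    if t_start <= o_start and o_end <= t_end:
        return "parallel: transportation contains orientation"
    return "parallel: partial overlap"

# The reported pattern: transportation spans the whole movement (e.g., 0-900 ms)
# while orientation starts later and ends earlier (e.g., 150-700 ms).
print(classify(0, 900, 150, 700))
```

Example values are illustrative; the actual phase boundaries were derived per trial from the IRED marker data.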

5.3.1.2 Task completion time (CT) and transportation time (TT)

The average task completion time (CT) across all conditions was 893 ms. As shown in Figure 5.3, CT increased with target distance, F(1,7) = 659.78, p < .001. CT was 807 ms at 100 mm and 980 ms at 200 mm. Neither task conditions nor disparity conditions had significant effects on CT.

The transportation time (TT) had an average value of 868 ms, taking up 97.1% of the task completion time (CT). Results of TT were similar to those of CT. TT increased from 776 ms at 100 mm to 959 ms at 200 mm, F(1,7) = 617.13, p < .001, as shown in Figure 5.4. The disparity in orientation between haptic and visual displays had no significant effect on TT.

Figure 5.2A. Time courses at target distance 100 mm and 200 mm. TT = Transportation time; OT = Orientation time.

Figure 5.2B. Time courses for physical object to graphic target match with disparity (30 Deg.) and without disparity (0 Deg.).

Figure 5.2C. Time courses for graphic object to graphic target match with disparity (30 Deg.) and without disparity (0 Deg.).

Figure 5.3. Task completion time at target distances 100 mm and 200 mm.

Figure 5.4. Transportation time at target distances 100 mm and 200 mm.

5.3.1.3 Orientation time (OT)

The orientation time (OT) was 612 ms on average, 70.4% of the task completion time (CT). Distance had a significant effect on OT, F(1,7) = 56.91, p < .001. OT was 572 ms at 100 mm and increased to 651 ms at 200 mm (Figure 5.5A).

There was a significant interaction between orientation disparity and task conditions, F(1,7) = 13.95, p < .01. As shown in Figure 5.5B, OT was the longest, 661 ms, when the task was to match the graphic object to the target and there was disparity between the haptic and graphic displays. In contrast, for the other three conditions, OT values were very close, around 600 ms. When there was no disparity, it took 601 ms to match the physical object to the target, and 598 ms to match the graphic object to the target. These values were close to the 586 ms observed when there was disparity but the task was to match the physical object to the target. Thus, subjects took more time to match the graphic object to the target than to match the physical object to the target when there was disparity between haptic and visual displays. This suggested that the subjects could successfully disregard the discrepant visual information to achieve a fast orientation of the physical object. It is interesting to note that the orientation disparity between haptic and graphic displays only affected the orientation process, OT, but not the transportation process, TT.

Figure 5.5A. Orientation time at target distances 100 mm and 200 mm.

Figure 5.5B. Orientation time with task conditions and disparity conditions. Physical = physical object to graphic target match; Graphic = graphic object to graphic target match; 0 Degree = no disparity; 30 Degree = disparity.

Figure 5.6. Variable errors of distance with task conditions.

Figure 5.7. Variable errors of angle with task conditions and disparity conditions.

5.3.2 Spatial errors

5.3.2.1 Constant errors of distance (CED) and constant errors of angle (CEA)

The average constant error of distance (CED) was very small, 0.13 mm, and was not significantly different from zero. The average constant error of angle (CEA) was 0.96 degrees, not significantly different from the specified target angle.

5.3.2.2 Variable errors of distance (VED)

Task conditions had a main effect on the variable errors of distance (VED), F(1,7) = 6.34, p < .05. As shown in Figure 5.6, VED was 1.5 mm for the graphic object to the graphic target match, increasing to 2.5 mm for the physical object to the graphic target match. This result showed that subjects achieved better accuracy (more consistency) in object transportation by using visual information rather than haptic information. The disparity between haptic and graphic displays had no significant effects on VED. Similar to the TT measure, the orientation disparity between haptic and graphic displays did not affect the spatial errors of the transportation process.

5.3.2.3 Variable errors of angle (VEA)

The average value of the variable errors of angle (VEA) was 3.7 degrees. There was a significant two-way interaction between disparity and task conditions, F(1,7) = 21.22, p < .01. As shown in Figure 5.7, the greatest VEA of 7.4 degrees occurred for the physical object to the graphic target match when there was disparity. For the other three conditions, VEAs were similar: with no disparity, 2.5 degrees for the physical object to the graphic target match and 2.3 degrees for the graphic object to the graphic target match; with orientation disparity, VEA was 2.4 degrees for the graphic object to the graphic target match. In terms of VEA, subjects had difficulties accurately orienting the physical object to the target when disparity was present. Patterns of VEA data in Figure 5.7 are opposite to those of OT data in Figure 5.5B. It should also be noted that target distances had no significant effect on VEA.

5.4 Discussion

5.4.1 Optimum human performance with no disparity

Results from this experiment supported our first hypothesis: human performance was better when there was no spatial orientation disparity between haptic and graphic displays of objects. With no disparity, there was no difference in object manipulation between a subject using haptic information and using visual information about objects. It is not clear, in the case of no disparity, which information was actually used by the subject to perform the task. The theory of visual dominance (Posner et al., 1976) suggests that visual information may be the primary source guiding object manipulation. However, as suggested by the findings of Experiment 2, both haptic and graphic displays played a role in object manipulation, and therefore it was the consistency between haptic and graphic displays that resulted in optimum human performance.

Haptic and graphic displays are superimposed in the real world. Thus, "natural" object manipulation in the real world generally yields optimum performance, compared with that in a virtual environment, where there are often disparities between haptic and graphic displays. This suggests advantages of an augmented environment which has graphic displays superimposed on physical objects.

5.4.2 Effects of disparity on orientation only

The orientation disparity only affected the orientation process, not the transportation process. If the orientation disparity is considered as an input for the orientation process, it only influences the output of the orientation process. In other words, the transportation process is independent of the orientation disparity between haptic and graphic displays. This result was not predicted by our hypothesis. In comparison, the target distance was considered as an input for the object transportation process, but it affected the object orientation process as well (Figure 5.5A).

5.4.3 Roles of haptic and visual information

The orientation time was shorter for the physical object to the graphic target matches than for the graphic object to the graphic target matches when the disparity was present. This indicates that the subject was able to make use of haptic information to facilitate the object orientation speed. The evidence supports our suggestion that the subject may use both visual and haptic information to perform manipulation, that is, visual dominance does not mean that visual information completely overwrites the haptic information presented to the subject. The extra orientation time for the graphic object to the graphic target matches with disparity could be because the motor control processing was somehow interrupted or made more complex by the disparity between haptic and graphic displays.

Spatial error measures, however, demonstrated quite a different picture from the temporal measures in terms of the disparity effect. The variable angle error was much smaller for the graphic object to the graphic target match than for the physical object to the graphic target match. This indicates that accuracy of object orientation control is generally influenced more by visual information than by haptic information. When subjects were asked to use only the haptic information and discard the visual information, significant spatial uncertainty occurred. The increase in orientation errors for the physical object to the graphic target match may be attributed to interference from the disparate visual information.

Altogether, the results suggest that haptic information and visual information may affect different aspects of the orientation process. Haptic information may be more related to manipulation speed, while visual information is more related to manipulation accuracy. An alternative explanation is that the subject employed a speed/accuracy tradeoff to complete the task.

5.4.4 The structure of object transportation and orientation

Object transportation and orientation had a parallel, interdependent structure, consistent with Experiments 1 and 2. Specifically, the time course of transportation contained that of orientation. The change in target distances affected both TT and OT. This experiment showed that certain factors (orientation disparity) only affected one component of the object manipulation processes while other factors (e.g., target distance) affected both processes. Identifying the role of these factors may provide further insight into the underlying mechanisms of object manipulation in virtual environments.

5.4.5 Implications for HCI design

The above findings provide implications for HCI design. If interaction tasks involve only object transportation, such as pointing, the graphic object may be designed with an arbitrary orientation in relation to the controller. In current 2D graphical user interfaces, for example, the graphic arrow cursor usually has a fixed orientation of 45 degrees, regardless of the mouse orientation. This orientation design may have no effect on pointing tasks where the cursor is only required to make translation movements. However, if tasks require object rotation, such as multi-dimensional manipulation in virtual environments, then the orientation of a 3D cursor relative to the controller will be critical. Our results indicate that users' performance will be better if the orientation of the cursor is properly aligned with that of the controller.
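This design recommendation can be sketched in a few lines of code. The snippet below is our own illustration, not part of any experimental software; the yaw-only simplification and the function names are assumptions made for clarity.

```python
def cursor_yaw(controller_yaw_deg, fixed_yaw_deg=None):
    """Yaw assigned to the graphic cursor: either a fixed angle (as with
    the 2D arrow cursor, conventionally drawn at 45 degrees) or, for 3D
    rotation tasks, the controller's own yaw."""
    return controller_yaw_deg if fixed_yaw_deg is None else fixed_yaw_deg


def disparity_deg(a_deg, b_deg):
    """Smallest absolute angular difference between two yaw angles."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)
```

Under this sketch, a pointing-only cursor can keep a fixed yaw with no performance cost, whereas a cursor used for rotation tasks should keep the cursor-controller disparity at zero.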

5.5 Conclusions

It is concluded from Experiment 3:

1. Humans achieve optimum object manipulation performance when haptic and graphic displays of objects are superimposed and consistent.

2. Disparity in orientation between haptic and graphic displays of objects affects the object orientation process, increasing the orientation time for the graphic object to graphic target matches and the spatial errors for the physical object to graphic target matches.

3. Disparity in orientation between haptic and graphic displays of objects has no significant effect on the object transportation process.

4. Object transportation and orientation processes have a parallel, interdependent structure, regardless of orientation disparity between haptic and graphic displays of objects.

Chapter 6 Experiment 4: The Role of Contextual Haptic and Visual Constraints

6.1 Introduction

Virtual environments generally afford both domain constraints and contextual constraints for human object manipulation. Domain constraints on object manipulation include intrinsic properties of the controller, cursor and target. Contextual constraints are the surrounding information for the object manipulation. Both domain and contextual constraints can affect human performance on object manipulation. Experiment 4 was designed to explore the role of contextual haptic and visual constraints on object transportation and orientation.

One aspect of contextual haptic constraints is the passive haptic feedback in augmented environments where the surrounding graphic cues are augmented with physical cues. Recent research shows that such passive haptic feedback can not only provide realism in virtual environments (Hoffman, 1998), but also enhance human performance (Lindeman, Sibert and Hahn, 1999). Lindeman et al. compared human performance on docking a graphic object to a "floating" graphic panel with the panel augmented with a physical paddle. They found that the passive haptic feedback with the paddle resulted in a 44% decrease in movement time and a 38% increase in accuracy.

Lindeman et al. (1999) did not explicitly address the problem of degrees of freedom (DOF) for object manipulation in their studies. Hinckley et al. (1997) conducted an experiment to compare two DOF rotation with three DOF rotation. They found, in an orientation matching task, that users completed the task up to 36% faster when using three DOF input than two DOF input, without significant loss of accuracy.

Zhai (1995) reported a study on a six degrees of freedom elastic controller for object manipulation tasks. The elastic constraint can be considered as a kind of haptic constraint on the controller. They found that human performance was better when the elastic device was used, compared with isometric devices. They suggested that the elastic property of the controller provided more sensitivity for position control. It is necessary to further extend these studies into the structure of object transportation and orientation.

Contextual visual constraints may facilitate human performance in object manipulation (Servos, Goodale, and Jakobson, 1992; Arthur et al., 1993). A graphic background such as a checkerboard or ground plane has been widely used to enhance depth cues in virtual environments (Balakrishnan and Kurtenbach, 1999). Recent research by Robertson, Czerwinski and Larson (1998) suggested that the graphic background could improve users' spatial memory for information visualization. The role of contextual visual constraints in object manipulation needs further investigation.

In this experiment, we used a physical table to provide the contextual haptic constraints for object manipulation. We compared human performance on object manipulation on the table surface with that in free space. The movement on the table surface had fewer degrees of freedom than in free space. A graphic checkerboard, a "virtual table", served as the contextual visual constraint. The physical table surface was overlaid with the checkerboard. We tested three hypotheses:

1. Contextual haptic constraints enhance human performance on object transportation and orientation;

2. Contextual visual constraints facilitate human performance on object transportation and orientation;

3. Object transportation and orientation have the same parallel, interdependent structure as shown in previous experiments, regardless of contextual haptic and visual constraints.

6.2 Method

6.2.1 Subjects

Eight university student volunteers were each paid $10 for participating in one two-hour experimental session. All subjects were right-handed and had normal or corrected-to-normal vision. Subjects all had experience using a computer. Informed consent was obtained before the experimental session.

6.2.2 Experimental setup

This experiment was set up in the Virtual Hand Laboratory (VHL), as shown in Figure 6.1. The controller was a wooden cube of 30 mm. The graphic cursor cube of 30 mm was drawn to be superimposed on the wooden cube. The target was a stationary wire-frame graphic cube of 30 mm. The graphic target was located 70, 140 or 210 mm away from the starting position in the midline of the subject's body, at an angle rotated 22.5 or 45 degrees clockwise about a vertical axis.

Figure 6.1. The Virtual Hand Laboratory setup for Experiment 4. The stippled part of the table surface is removable. When this part is removed, the subject manipulates the wooden cube in free space. The graphic target cube (dashed line) is drawn on the table surface. The wooden cube (solid line) is the controller.

As shown in Figure 6.1, one part of the table surface was removable. The other part of the table, with the same height, was used to support the controller at its start position. The graphic target was perceived by the subject through the mirror, as if it sat on the table surface. When the table surface was present, the subject could slide the controller on the table surface; when the table surface was removed, the subject had to move the controller to the target in the air, without the table as a supporting surface. The part of the table surface at the start location of the controller was always present so that the controller was supported at the beginning of the movement.

This setup provided three contextual haptic conditions: table-slide, table-lift and no-table. The table-slide condition was when the physical table was present and the subject was instructed to slide the controller on the table surface. The table-lift condition was when the physical table was present, but the subject was instructed to slightly lift the controller from the table surface and land the controller on the table surface. The no-table condition was when the table was removed and the subject had to move the controller in the air to its final position. In all cases, the subject was to align the cursor cube with the target cube.

In the table-slide condition, the wooden cube (controller) and the cursor cube were constrained to three degrees of freedom, two for translation and one for rotation. In the table-lift condition, the wooden cube and cursor cube were constrained to three degrees of freedom only at the start and the end of the movement; they had six degrees of freedom for free motion between the start and the end. In the no-table condition, the wooden cube and the cursor cube had six degrees of freedom after they left the start position.
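The three conditions differ only in when the controller's pose is restricted to the table plane. A minimal sketch of that restriction follows; this is our own illustration, and the pose tuple layout is an assumption, not the VHL data format.

```python
def constrain_to_table(pose, table_z=0.0):
    """Project a free-space pose (x, y, z, roll, pitch, yaw in degrees)
    onto the table plane, leaving the three permitted degrees of
    freedom: x, y translation and yaw rotation."""
    x, y, _z, _roll, _pitch, yaw = pose
    return (x, y, table_z, 0.0, 0.0, yaw)
```

Under this sketch, table-slide would apply the projection on every frame, table-lift only at the start and end of the movement, and no-table never.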

A black and white checkerboard was displayed with a block size of 13.5 by 13.5 mm. The checkerboard was superimposed on the planar table surface. When the checkerboard was not present, object manipulation was performed on a black background. The light under the mirror was off at all times. Subjects saw only the graphic cursor and target, with no vision of the hand and the wooden cube during object manipulation.
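A checkerboard of this kind is straightforward to generate procedurally. The sketch below is our own hypothetical code, not the VHL implementation; it produces the block origins and colors for square blocks of 13.5 mm per side.

```python
def checkerboard(n_cols, n_rows, block_mm=13.5):
    """Row-major list of (x_mm, y_mm, is_white) block origins for a
    checkerboard with square blocks of block_mm per side."""
    return [(c * block_mm, r * block_mm, (r + c) % 2 == 0)
            for r in range(n_rows) for c in range(n_cols)]
```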

6.2.3 Procedure

System calibration was performed as described in Chapter 2. The workspace on the table surface, including the checkerboard, was calibrated so that the checkerboard was aligned to the tabletop. The cursor cube was registered to superimpose with the wooden cube. The individual subject's eye positions were also calibrated to obtain a customized, stereoscopic, head-coupled view. The task was to match the location and angle of the cursor cube to those of the graphic target as fast and accurately as possible. Trials were blocked on contextual haptic constraint and visual constraint conditions. Target distances and angles were randomly ordered over trials within each block. Ten trials were repeated in each experimental condition. At the beginning of each block, subjects were given 20 trials for practice.

6.2.4 Data analysis

Independent variables for this experiment were contextual haptic constraints, contextual visual constraints, target distances and target angles. Temporal dependent measures were: total task completion time (CT), object transportation time (TT), object orientation time (OT), and the relative time courses of the object transportation and orientation processes. Spatial error measures were: constant errors of distance (CED), variable errors of distance (VED), constant errors of angle (CEA), and variable errors of angle (VEA). ANOVAs were performed on the balanced design of 3 haptic constraints x 2 visual constraints x 3 target distances x 2 target angles.
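The two error types for each dimension can be computed directly from the trial endpoints: constant error measures bias, and variable error measures precision. A minimal sketch follows (the function name is our own, for illustration only).

```python
from statistics import mean, pstdev


def constant_variable_errors(final_values, target):
    """Constant error: mean signed deviation of trial endpoints from
    the target. Variable error: standard deviation of those deviations.
    Applies equally to distances (mm, giving CED/VED) and angles
    (degrees, giving CEA/VEA)."""
    deviations = [v - target for v in final_values]
    return mean(deviations), pstdev(deviations)
```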

6.3 Results

6.3.1 Temporal measures

6.3.1.1 Relative time courses

As shown in Figures 6.2A-D, in all experimental conditions, object transportation and orientation processes had a parallel structure. The transportation process always contained the orientation process. These results were consistent with the findings of previous experiments. As shown in Figure 6.2A, contextual haptic constraints had a significant impact on object transportation time (TT) and object orientation time (OT). In contrast, contextual visual constraints had little effect on TT and OT, as shown in Figure 6.2B. The effects of contextual haptic and visual constraints on the temporal measures are reported in detail in the following sections.

Figure 6.2A. Time courses of the transportation and orientation processes with haptic constraints. Ntable = moving in the air without the table; Tlift = lifting from the table during the movement; Tslide = sliding on the table during the movement.

Figure 6.2B. Time courses with visual constraints. Bon = graphic checkerboard on as background during the movement; Boff = graphic checkerboard off, replaced with a black background during the movement.

Figure 6.2C. Time courses with target distances.

Figure 6.2D. Time courses with target angles.

6.3.1.2 Task completion time (CT)

The average task completion time (CT) across all conditions was 933 ms. Haptic constraints had significant effects on CT, F(2, 14) = 85.90, p < .001. CT was 1192 ms when moving in the air. Sliding the physical object on the table resulted in an average CT of 749 ms; lifting on the table took 778 ms. As expected, CT increased significantly with the target distance (F(2, 14) = 359.57, p < .001): 799 ms at 70 mm, 927 ms at 140 mm and 1074 ms at 210 mm. CT increased as a function of the target angle (F(1, 7) = 9.77, p < .05), from 918 ms at 22.5 degrees to 948 ms at 45 degrees.

There was a significant interaction between the haptic condition and target distance, F(4, 28) = 20.45, p < .001. Post hoc analysis was performed on the three haptic constraint conditions for each target distance separately. Results revealed that for each target distance, the no-table condition had a significantly longer CT than the table-slide and table-lift conditions (p < .05). CTs in the table-slide and table-lift conditions did not differ significantly from each other. As shown in Figure 6.3, the longer the target distance, the larger the no-table effect on CT.

There was a significant interaction on CT between the target distance and angle, F(2, 14) = 11.21, p < .001. As shown in Figure 6.4, as the target angle increased, the increase in CT was more evident at the target distance of 70 mm than at 140 or 210 mm.

There was a three-way interaction among haptic conditions, visual conditions and target angles (F(2, 14) = 8.14, p < .01). In general, the checkerboard conditions resulted in similar CTs, as shown in Figures 6.5A, 6.5B and 6.5C. However, in the no-table condition, the presence of the checkerboard appeared to cause an increase in CT at 22.5 degrees.

Figure 6.3. Task completion time with haptic constraints and target distances.

Figure 6.4. Task completion time with target distances and angles.

Figure 6.5A. Task completion time with visual constraints and target angles in the table-slide condition.

Figure 6.5B. Task completion time with visual constraints and target angles in the table-lift condition.

Figure 6.5C. Task completion time with visual constraints and target angles in the no-table condition.

Figure 6.6. Transportation time with haptic constraints and target distances.

Figure 6.7. Transportation time with target distances and angles.

Figure 6.8. Transportation time with visual constraints and target angles.

6.3.1.3 Transportation time (TT)

The average transportation time (TT) was 906 ms, taking up 97% of the task completion time (CT) of 933 ms. Results for TT were similar to those for CT. Haptic constraints had effects on TT, F(2, 14) = 78.62, p < .001. TT was 749 ms in the table-slide condition, 778 ms in the table-lift condition, and increased to 1192 ms in the no-table condition. TT increased with target distance (F(2, 14) = 336.86, p < .001): 753 ms at 70 mm, 903 ms at 140 mm, and 1062 ms at 210 mm.

Haptic conditions interacted with target distance to affect TT, F(4, 28) = 16.98, p < .001, as shown in Figure 6.6. Post hoc analysis was performed on haptic constraint conditions for each target distance separately. Results revealed that for each target distance, the no-table condition had a significantly longer TT than the table-slide and table-lift conditions (p < .05). TTs in the table-slide and table-lift conditions did not differ significantly from each other. The longer the target distance, the larger the no-table effect on TT. The target angle interacted with the target distance, F(2, 14) = 4.55, p < .05, as shown in Figure 6.7. The effect of target angle on TT was more evident at the distance of 70 mm than at the other distances.

Visual constraints significantly interacted with the target angle, F(1, 7) = 6.90, p < .05. It appeared that at 22.5 degrees, the presence of the checkerboard resulted in a larger increase in TT than at 45 degrees, as shown in Figure 6.8.

6.3.1.4 Orientation time (OT)

The orientation time (OT) was 533 ms on average, 57% of the task completion time (CT). The effect of haptic constraints on OT was significant, F(2, 14) = 4.38, p < .05, and showed an opposite trend to that on TT, as demonstrated in Figure 6.9 (compared to Figure 6.6). The longest OT occurred in the table-slide condition, 560 ms; it reduced to 536 ms in the table-lift condition, and then further decreased to 504 ms in the no-table condition. Post hoc analysis revealed that OT in the no-table condition was significantly different from that in the table-slide condition (p < .05). There was no significant difference in OT either between the no-table and table-lift conditions or between the table-lift and table-slide conditions.

As expected, OT increased significantly with the target angle, F(1, 7) = 150.93, p < .001. On average, OT was 456 ms for 22.5 degrees and 610 ms for 45 degrees. There was a significant interaction between visual constraints and target angles, F(1, 7) = 10.18, p < .05, as shown in Figure 6.10. It appeared that the checkerboard had more impact on OT at 45 degrees than at 22.5 degrees of target angle.

An interaction on OT was found between the target distance and angle, F(2, 14) = 8.22, p < .01. As the target distance increased, the difference in OT between the two target angles became smaller (Figure 6.11). As in previous experiments, the target distance had effects on OT, indicating an interdependent structure of object transportation and orientation processes.

Figure 6.9. Orientation time with haptic constraints.

Figure 6.10. Orientation time with visual constraints and target angles.

Figure 6.11. Orientation time with target distances and angles.

6.3.2 Spatial errors

6.3.2.1 Constant errors of distance (CED) and constant errors of angle (CEA)

Overall, the spatial errors were very small. The average value of constant errors of distance (CED) was 0.04 mm, not significantly different from the target location. The haptic constraint and target angle had main effects on CED, but the change in CED was trivial, ranging from 0.09 mm to 0.28 mm.

The average value of constant errors of angle (CEA) was 0.8 degree, not significantly off the target angle. There was a main effect of the haptic constraint on CEA, F(2, 14) = 4.26, p < .05. CEA increased from 0.4 degree in the no-table condition, to 1.0 degree in table-lift, and to 1.1 degrees in table-slide. There was an interaction between visual constraints and target distances, F(2, 14) = 4.52, p < .05. It appeared that the checkerboard resulted in a larger CEA at distances of 140 and 210 mm than at 70 mm.

6.3.2.2 Variable errors of distance (VED)

Haptic constraints, visual constraints, and target angles had main effects on the variable errors of distance (VED). VED increased a small amount with changing haptic constraints (e.g., 1.5 mm in the no-table condition, 1.8 mm in the table-lift condition, and 1.9 mm in the table-slide condition), F(2, 14) = 5.19, p < .05, as shown in Figure 6.12. Post hoc analysis revealed a significant difference in VED between the table-slide and no-table conditions (p < .05), but no difference either between the table-slide and table-lift conditions or between the no-table and table-lift conditions. The presence of the checkerboard background increased VED from 1.6 mm to 1.9 mm (F(1, 7) = 16.54, p < .01), as shown in Figure 6.13. This result was unexpected, even though the amount of increase in VED was small.

It was interesting to note that as the target angle became larger, VED decreased from 1.8 mm to 1.7 mm, F(1, 7) = 5.75, p < .05, as shown in Figure 6.14. Even though the amount of change in VED due to the target angle was rather small, it was consistent. This was a case where spatial outputs of the transportation process were affected by the input for the orientation process, the target angle.

There was an interaction on VED between the haptic constraint and the target distance, F(4, 28) = 3.67, p < .05, as shown in Figure 6.15. Post hoc analysis revealed that at the target distance of 140 mm, VED in the table-slide condition was significantly larger than that in the table-lift condition or in the no-table condition, but VEDs in the table-lift and no-table conditions did not differ from each other. The differences in VED among haptic constraints were not significant at target distances of 70 mm and 210 mm.

There was a three-way interaction on VED among the haptic constraint, visual constraint and target distance, F(4, 28) = 4.33, p < .01. As shown in Figures 6.16A, 6.16B and 6.16C, VED consistently increased in the presence of the checkerboard at the target distance of 70 mm across haptic constraints. The checkerboard appeared not to make a difference in VED in the table-lift condition at 210 mm or in the no-table condition at 140 mm.

Figure 6.12. Variable errors of distance with haptic constraints.

Figure 6.13. Variable errors of distance with visual constraints.

Figure 6.14. Variable errors of distance with target angles.

Figure 6.15. Variable errors of distance with haptic constraints and target distances.

Figure 6.16A. Variable errors of distance with visual constraints and target distances in the table-slide condition.

Figure 6.16B. Variable errors of distance with visual constraints and target distances in the table-lift condition.

Figure 6.16C. Variable errors of distance with visual constraints and target distances in the no-table condition.

6.3.2.3 Variable errors of angle (VEA)

The variable errors of angle (VEA) had an average value of 2.0 degrees. The haptic constraint had significant effects on VEA, F(2, 14) = 12.27, p < .001. VEA was 1.7 degrees in no-table, 2.1 degrees in table-lift, and 2.3 degrees in table-slide, as shown in Figure 6.17. Post hoc analysis revealed that VEA in the no-table condition was significantly smaller than that in both the table-lift and table-slide conditions (p < .05). There was no difference in VEA between the table-lift and table-slide conditions.

The checkerboard led to a small increase in VEA, F(1, 7) = 6.56, p < .05, from 2.0 to 2.1 degrees (Figure 6.18). It was unexpected that the presence of the checkerboard had detrimental effects on both VED and VEA. Figure 6.19 shows that VEA differed among target distances (F(2, 14) = 3.75, p < .05): 2.1 degrees at 70 mm, 1.9 degrees at 140 mm and 2.0 degrees at 210 mm. The target distance, as an input for the transportation process, affected VEA, an output of the orientation process.

There was a three-way interaction on VEA among the haptic constraint, visual constraint and target distance, F(4, 28) = 4.35, p < .01. As shown in Figures 6.20A, 6.20B and 6.20C, VEAs were similar or generally increased with the presence of the checkerboard, with the exception of the table-lift condition at the target distance of 70 mm. In this particular condition, it appeared that the presence of the checkerboard reduced VEA.

Figure 6.17. Variable errors of angle with haptic constraints.

Figure 6.18. Variable errors of angle with visual constraints.

Figure 6.19. Variable errors of angle with target distances.

Figure 6.20A. Variable errors of angle with visual constraints and target distances in the table-slide condition.

Figure 6.20B. Variable errors of angle with visual constraints and target distances in the table-lift condition.

Figure 6.20C. Variable errors of angle with visual constraints and target distances in the no-table condition.

6.4 Discussion

6.4.1 The role of contextual haptic constraints

Haptic constraints had profound effects on human performance in object transportation and orientation. The task completion time (transportation time) was reduced dramatically with the tabletop, compared to when no supporting surface was present. This was consistent with the speed findings of Lindeman et al. (1999). This result also supports our suggestions in previous experiments that haptic information has more impact than visual information on object manipulation speed. The fact that the contextual haptic constraint was imposed in the control space indicates the importance of human motor control systems in object manipulation.

In Experiment 4, the task required three degrees of freedom. When the controller was moved in free space, it had six degrees of freedom. The degrees of freedom were reduced to three when the controller slid on the tabletop, and the controller actually became a three-degree-of-freedom input device. Jacob et al. (1994) found that object manipulation speed increased when the structure (dimensions) of tasks was matched with the structure of input devices. Hinckley et al. (1997) found that users manipulated objects faster when using three DOF input devices than two DOF input devices for three DOF orientation tasks. The results for task completion time from Experiment 4 were consistent with these previous findings by Jacob et al. and Hinckley et al.

No significant difference was found in task completion time between the table-slide and table-lift conditions. At the end of movements, the controller was constrained to three degrees of freedom for both table-slide and table-lift. This suggests that the contextual haptic constraint at the end of the movement, or at the target, is more critical than during the course of the movement. Similar results were found for a 3D pointing task where subjects pointed to a solid target faster than to a hole (MacKenzie, 1992). MacKenzie suggested that the solid target (haptic constraint) helps to stop the movement, compared to pointing to a hole where subjects have to take extra time to decelerate the pointing device.

At the same time, however, the haptic constraint, the table surface, consistently increased the spatial variable errors in task performance, although the increase was quite small. This finding is counter-intuitive and contrary to Lindeman et al.'s results (1999). Hinckley et al. (1997) did not find that the constraints on degrees of freedom for object orientation had effects on spatial errors. It is not clear what factors in these experiments caused such inconsistency.

6.4.2 The role of contextual visual constraints

It was originally predicted that the contextual visual constraint would serve as guidance for object manipulation. It is surprising that the visual constraint, the checkerboard background, generally deteriorated human performance in both times and spatial errors. Even though the effect of the visual constraint was small, it is theoretically and practically important. It appeared that the visual constraint, the checkerboard, interfered with the manipulation task rather than providing extra visual cues to enhance object manipulation performance. One interpretation is that the checkerboard background distracted the subject's attention from the target. Other factors such as the pattern and color of the checkerboard might also contribute to the interference.

The detrimental effect of the checkerboard may be limited to the unique features of this experiment. However, if the same effect is replicated with various depth cues such as ground planes or stereoscopic views, it poses an important question: can depth cues of graphics, supposed to help in visualization, benefit interaction in general? Results from previous research are not conclusive (Wickens, 1992; Arthur et al., 1993; Cao et al., 1996; Boritz, 1998; Robertson et al., 1998). The theory of two visual pathways by Goodale, Jakobson and Servos (1996) suggests that the depth cues that facilitate perception may not necessarily benefit action (object manipulation). Recent research by Boritz (1998) shows that the depth cues provided by a head-coupled view were generally detrimental for object docking in virtual environments. Cao, MacKenzie and Payandeh (1996) found that the depth cues from stereoscopic viewing helped for certain tasks, but not for others. Our results show that the cues provided by the checkerboard actually hindered object manipulation, yet the checkerboard-like background has been widely used in human-computer interaction design (Balakrishnan and Kurtenbach, 1999). It is possible that the depth cues provided by the checkerboard or head-coupled setup may benefit object perception more than object manipulation. The role of contextual visual constraints on object manipulation in virtual environments certainly needs further investigation.

6.4.3 The structure of object transportation and orientation

Experiment 4 again showed that object transportation and orientation have a parallel, interdependent structure. The object transportation process contained the orientation process. The target distance had effects on object transportation time as well as orientation time; that is, object transportation and orientation were interdependent. This structure persisted over all experimental conditions, consistent with the results from previous experiments.

6.4.4 Implications for HCI design

The Jrsign of k i n u d anJ i l u p i ~ ~ ~environments kd should take advantage of passive haptic constraints. The significant benefit gained in the task completion time from the

haptic constraint can be weighed against the relatively small reduction in accuracy control. Passive hilptic constrclints such as table surfices, walls or pilddles arc cheap md reliable, and could be easily implemented in vinual environments. For example. a grclphic 1D command menu in vinual environments could bc augmented with a physical plate whcncvcr possible. Recently, force feedback devices have been implemented in vinual environments to enhance the realism of interaction. It is generally believcd that force feedback devices cm improve human performance in accurwy control. We found that the significmt

benefit of haptic constraints is for speed control rather than accuracy control. Thus, more attention should be paid to the utilization of force feedback devices for applications where speed is a major concern. We issue a caveat about the role of contextual visual constraints in virtual environment design. The results of Experiment 4 suggest that the background depth cues

may benefit object perception, but actually degrade object manipulation. For example, in medical virtual reality applications, a checkerboard background may be used for diagnosis, but not for surgery. We cannot assume that graphic cues beneficial for

perception will always be beneficial for interaction. The effects of contextual visual constraints must be carefully evaluated before implementation. Contextual haptic and visual constraints have significant effects on human performance on object manipulation in virtual environments. Research in this area is very limited, compared to research on domain constraints of object manipulation. Contextual constraints are ubiquitous in virtual environments, and human-computer interaction design should be guided by understanding contextual constraints as well as domain constraints.

6.5 Conclusions

It is concluded from Experiment 4:

1. Contextual haptic constraints, such as a table surface, improve human performance in the task completion time, but slightly reduce the spatial accuracy.

2. Contextual visual constraints, such as a checkerboard background, degrade human performance on object manipulation for both speed and accuracy.

3. Object transportation and orientation have a parallel, interdependent structure over various contextual haptic and visual conditions.

Chapter 7 Summary and Discussion

The objectives of this research were to identify the structure of object transportation and orientation in virtual environments, to investigate the role of haptic and visual information in object transportation and orientation, and to provide implications for human-computer interaction design. In this chapter, we briefly review the four experiments, summarizing the major results from each experiment. The findings from all four experiments are then discussed with respect to the above objectives.

7.1 Review of the four experiments

7.1.1 Experiment 1: The structure of object transportation and orientation

Experiment 1 was designed to explore the basic structure of object transportation and orientation by the human hand. The experimental conditions are shown in Table 7.1. Object manipulation was performed under two visual conditions: full vision of the hand and controller, and no vision of the hand and controller. In a typical virtual environment, the visual condition is expected to be somewhere between these two visual conditions.

For example, a user in virtual environments may see a graphic representation (e.g., a cursor) of a controller, but may not see the hand or have the graphics superimposed with the controller. Thus, the results from these experimental conditions can be considered as a reference point for human performance. Since the condition of full vision of the hand and controller is close to object manipulation in the real world, the experimental results under this condition can be assumed to indicate a "natural" object manipulation structure.

The target distances ranged from 30 mm and 100 mm to 200 mm, reflecting a normal reach workspace for desktop computers (Wang, Das and Sengupta, 1999). The target angles of 22.5 and 45 degrees allowed a subject to orient the controller while retaining a comfortable posture. Object manipulation in a normal workspace is common in daily human-computer interaction tasks.

Table 7.1. Experimental conditions for Experiment 1.

  Target Distance   Target Angle    Visual Condition
  30 mm             22.5 degrees    Hand and controller
  100 mm            45 degrees      No hand and controller
  200 mm

It was found that the object transportation and orientation processes had a parallel, interdependent structure. The object transportation process contained the orientation process even in the condition with a very short target distance (30 mm) and a large target angle (45 degrees). The target distance not only increased the transportation time, but also the orientation time. Meanwhile, the target angle had significant effects on the transportation time as well as on the orientation time. This structure was retained regardless of whether or not a subject could see the hand and controller. Without vision of the hand and the controller, spatial errors of object manipulation increased significantly. It appeared that object transportation errors increased more dramatically than did orientation errors. The visual conditions did not affect the total task completion time.

7.1.2 Experiment 2: The effect of controller, cursor and target sizes

Experiment 2 was designed to examine the interactions among three basic elements of object manipulation: the controller, the cursor and the target. This was achieved by investigating the effect of object size, a common attribute of the controller, the cursor and the target, as shown in Table 7.2. Variations in the relative size among the controller, the cursor and the target actually changed the relationship between what subjects felt in the hand and what they saw with the eyes during object manipulation. Therefore, the results of this experiment provided insight into the interplay of haptic and visual information in virtual environments.

Table 7.2. Experimental conditions for Experiment 2.

  Target Distance   Controller Size   Cursor Size   Target Size
  100 mm            20 mm             20 mm         20 mm
  200 mm            50 mm             50 mm         50 mm

It was the relative size, rather than the absolute size, of the controller, the cursor and the target that significantly affected the object transportation and orientation processes. There were significant interactions between controller size and cursor size, as well as between cursor size and target size, on the total task completion time, transportation time, orientation time and spatial errors. The same size of controller and cursor facilitated object manipulation speed, where the haptic feedback of the controller was consistent with the visual feedback of its graphic presentation, the cursor. It appeared that the same size of graphic cursor and target provided strong visual feedback, which accordingly improved object manipulation accuracy. Object transportation and orientation showed a parallel, interdependent structure similar to that in Experiment 1. The object transportation process contained the orientation process over all conditions. The target distance affected both transportation time and orientation time.

7.1.3 Experiment 3: The effect of spatial orientation disparity between haptic and visual displays

Experiment 3 further examined the role of haptic and visual information in object manipulation, specifically, the effects of orientation disparity between the haptic and graphic displays of objects. The disparity was realized by rotating the graphic cursor 30 degrees from the physical controller position, as compared with the condition of completely superimposing the cursor with the controller (Table 7.3). There were two tasks: physical to graphic match and graphic to graphic match. In physical to graphic matches, subjects were required to match the physical controller to the graphic target location and angle. For the graphic to graphic match task, subjects matched the graphic cursor to the graphic target, regardless of how the controller felt.

Table 7.3. Experimental conditions for Experiment 3.

  Target Distance   Disparity     Task
  100 mm            0 degrees     Physical to graphic
  200 mm            30 degrees    Graphic to graphic

Human performance on object orientation was optimal with no orientation disparity, that is, when the graphic cursor was completely aligned with the physical controller. Orientation disparity between haptic and graphic displays of objects increased the orientation time for graphic to graphic matches, and spatial errors for physical to graphic matches. However, orientation disparity had no effect on the object transportation process, affecting neither transportation time nor spatial errors. As in Experiments 1 and 2, object transportation and orientation showed a parallel, interdependent structure in all conditions. The time course of the transportation process covered that of the orientation process. The target distance affected both object transportation time and orientation time.

7.1.4 Experiment 4: The role of contextual haptic and visual constraints

Experiment 4 focused on different aspects of object manipulation from the previous experiments: contextual haptic and visual constraints in the surrounding environment. The previous experiments studied how the intrinsic and extrinsic properties of objects in the environment affected human performance. This experiment investigated how the "environmental" constraints or surrounding information influenced object transportation and orientation. The experimental conditions are shown in Table 7.4. A physical table surface was used to provide contextual haptic constraints on object manipulation. Subjects were instructed to either slide the controller on the table or lift the controller from the table to match the target. When the table surface was removed, subjects moved and terminated the controller in the air. A graphical checkerboard background was used as a "virtual table" to provide contextual visual constraints on object manipulation.

Table 7.4. Experimental conditions for Experiment 4.

  Target Distance   Target Angle    Haptic Constraint   Visual Constraint
  70 mm             22.5 degrees    Table slide         Checkerboard on
  140 mm            45 degrees      Table lift          Checkerboard off
  210 mm                            No table

Contextual haptic constraints, e.g., the presence of the physical table surface, significantly reduced the task completion time. The major difference occurred between the table conditions and the no table condition. The structural match between a three DOF task and a three DOF controller provided by the table sped up the task completion time. Human performance actually became worse with the contextual visual constraint of the checkerboard as a "virtual table" surface. When the checkerboard was present, the task completion time was longer and the spatial errors were larger than when the checkerboard was not present. With haptic and visual constraints, object transportation and orientation retained a parallel, interdependent structure. In the absence of the table surface as a contextual haptic constraint, the transportation time was longer while the orientation time was shorter. It appeared that subjects were faster to integrate the orientation process into the transportation process in free space.

7.2 Discussion

7.2.1 The structure of object transportation and orientation

Concurrence

Object transportation and orientation showed a parallel structure. Specifically, the transportation process contained the orientation process. This pattern was consistent over all experimental conditions in all four experiments. The task completion time was almost entirely determined by the object transportation time. Subjects had no difficulty in simultaneously controlling both the object transportation and orientation processes in the virtual environment designed for these experiments.

These findings are different from previous research indicating that humans tended to control one process at a time, either transportation or orientation (Zhai and Milgram, 1997). The difference in results may be primarily due to the different virtual environments used for the experiments. We used a virtual environment in which the display space and the control space were superimposed, while previous research typically employed a virtual environment which separated the display space and control space. It was noted that object manipulation in this study was much faster and more accurate than in most previous research (Ware, 1990, 1998; Arthur et al., 1993; Zhai, 1995; Boritz, 1998). The display space and the control space are superimposed in the real world, where humans regularly perform object transportation and orientation simultaneously. Results from this study provide further evidence that spatial overlay of the display space and control space allows humans to transfer their object manipulation skills from the real world into virtual environments to achieve better performance. In other words, separation of the display space and the control space in virtual environments may disrupt the coordination of object transportation and orientation observed in natural prehension.

However, the simultaneous control time, or the orientation time relative to the transportation time, did change according to various task conditions. As the target distance shortened or the target angle increased, the simultaneous control time of the two processes was relatively larger. This result provides further insight into the underlying mechanism of multiple dimensional object manipulation. Jacob et al. (1994) suggested that the concurrence or structure of multiple dimensional interaction was determined by the perceptual space, e.g., the visual attributes of objects in the display space. If the structure of the multiple dimensional interaction completely depends on the perceptual space, the quantitative change in the target distance or angle should not change the simultaneous control time between the two processes. Changes in the simultaneous control time due to the target distance and angle indicate that the structure of multiple dimensional interaction depends not only on perceptual space, but also on motor control space.
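The simultaneous control time discussed above can be made concrete with a small sketch. The function and the sample interval values below are hypothetical illustrations (not the thesis's analysis code or data); they show one way to compute the overlap between the transportation and orientation time courses of a single trial:

```python
# Hypothetical sketch: simultaneous control time as the overlap between the
# transportation and orientation intervals of one trial. Values are invented.

def simultaneous_control_time(transport, orient):
    """Overlap (ms) between two (start_ms, end_ms) intervals; 0 if disjoint."""
    start = max(transport[0], orient[0])
    end = min(transport[1], orient[1])
    return max(0.0, end - start)

# Example trial: transportation spans 0-800 ms, orientation 50-500 ms,
# i.e., the orientation process is fully contained in transportation.
transport = (0.0, 800.0)
orient = (50.0, 500.0)

overlap = simultaneous_control_time(transport, orient)
print(overlap)                              # 450.0
print(overlap / (orient[1] - orient[0]))    # 1.0: orientation fully nested
```

A fully nested orientation interval (ratio 1.0) corresponds to the parallel structure reported here; a ratio below 1.0 would indicate partial serialization of the two processes.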

Interdependence

There was interdependence between the object transportation and orientation processes. The target distance, as an input to the object transportation process, always affected the object orientation process in terms of orientation time. When the target distance was longer, the orientation time was longer, even though subjects could orient through the same angle in a shorter time. It seemed that subjects needed to allocate a certain time for object transportation alone near the target location. When there was sufficient time to complete transportation at the end, such as in the condition of a long distance or a small angle, subjects had no need to complete orientation quickly.

The target angle had effects on the transportation time in Experiments 1 and 4 (only these two experiments manipulated target angle as an independent variable). The target angles were 22.5 and 45 degrees for both experiments. Experiment 1 provided a very short target distance of 30 mm, in addition to 100 mm and 200 mm. Experiment 4 had target distances of 70 mm, 140 mm and 210 mm. There were interactions between the target distance and the target angle. The effect of target angle on transportation time was more evident at a shorter distance. It appeared that there was a threshold of the target angle, relative to the target distance, at which the transportation time was profoundly affected. Under that threshold, the orientation process could be seamlessly integrated into the transportation process, costing no extra time; beyond that threshold, the object orientation could result in an increase in the transportation time. Further research is needed to explore these ideas using a wider range of target angles. Jeannerod (1984) suggested that there were "two independent visuomotor channels" for reaching and grasping by the human hand. Object transportation and orientation require subjects to hold an object in hand and are different tasks from reaching and grasping. Object transportation and orientation showed interdependence, different from the relationship between reaching and grasping. This indicates that the theory of "two independent visuomotor channels" for reaching and grasping cannot be automatically extended to object transportation and orientation when an object like a controller is in hand. Such a distinction is important and should be further studied.
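The interdependence evidence discussed above, with target distance (an input to transportation) shifting orientation time, can be sketched as a simple per-distance comparison. The trial data below are invented for illustration only and are not taken from the experiments:

```python
# Hypothetical sketch: group invented orientation times by target distance
# and compare means; a monotone rise with distance would be consistent with
# the interdependence reported in Experiments 1-4.
from collections import defaultdict
from statistics import mean

trials = [  # (target_distance_mm, orientation_time_ms) -- fabricated values
    (30, 310), (30, 296), (100, 358), (100, 372), (200, 410), (200, 398),
]

by_distance = defaultdict(list)
for distance_mm, orientation_ms in trials:
    by_distance[distance_mm].append(orientation_ms)

means = {d: mean(times) for d, times in by_distance.items()}
for d in sorted(means):
    print(d, means[d])
```

In an actual analysis, the per-distance means would of course be submitted to an ANOVA rather than eyeballed, but the grouping logic is the same.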

Structural model

The structural model used in these four experiments has advantages over previous research methodologies for multiple dimensional object manipulation. By decomposing and examining the structure of the multiple dimensions, we can obtain detailed insight into the relationship of the object transportation and orientation processes. As shown in the experimental results, if we had used only overall measures such as the task completion time, a lot of valuable information would have been lost. An example is the results of Experiment 3. One of the interesting findings in Experiment 3 was that the orientation disparity between haptic and visual displays affected only the orientation process, not the transportation process. This finding may have important applications for HCI design. If we had measured only task completion time, we would not have such insight into human performance. Theoretical models such as the "seven stages of action" (Norman, 1988) and "syntactic-semantic object-actions" (Shneiderman, 1992) help to understand human-computer interaction with qualitative description. However, it is difficult to directly and quantitatively measure and validate these models. On the other hand, prediction models such as the "keystroke-level model" (Card et al., 1983) and the "Fitts' law model" (MacKenzie, 1992) provide quantitative description, but generally do not deal with multiple processes or the relationships among multiple processes. The structural model used in this dissertation lies between a theoretical model and a prediction model. The structural model addresses the qualitative relationship among multiple processes in terms of concurrence and interdependence, yet can be measured and quantified, as demonstrated in this study.

7.2.2 Effects of haptic and visual information

Interplay

The interplay of haptic and visual information in virtual environments on human performance was evident from Experiments 2 and 3. In Experiment 2, it was identified that it was the relative size of objects that affected human performance for object transportation and orientation. When the physical controller and its graphic representation (the cursor) had the same size, subjects achieved better performance. Similarly, in Experiment 3, subjects performed better when there was no orientation disparity between the physical controller and its graphic presentation. As a general principle, one might posit that consistency between haptic and visual information improves human performance in object manipulation.

In many current virtual environments, the haptic and visual information about objects presented to the user is often inconsistent. The user then needs to recalibrate the relationship between the haptic and visual information of objects. This may impose difficulties on users as they transfer object manipulation skills developed in the real world into the virtual environment. To improve human performance in virtual environments, both haptic and visual displays of objects should be kept consistent whenever possible.

The role of visual information

Humans use both haptic and visual information for object manipulation, and there is interplay between the haptic and visual displays of objects. However, converging evidence from these four experiments suggested that the roles of haptic and visual information may be related to different aspects of human performance.

The visual information presented in virtual environments showed profound effects on object manipulation accuracy. In Experiment 1, deprivation of visual information about the hand and the controller resulted in a significant increase in spatial errors for both the transportation and orientation processes. However, the visual conditions had no effect on the time measures. In Experiment 2, the cursor and the target were in the visual display domain, and the relative size difference between them reflected the change of visual feedback conditions. It was found that the relative size of the cursor and target significantly affected spatial errors. In Experiment 3, with orientation disparity, the angle errors increased dramatically when subjects were instructed to match the physical cube to the target according to what they felt in the hand rather than what they saw. Experiment 4 demonstrated similar results. When the visual context (e.g., the checkerboard) varied, spatial errors were affected. In general, it appeared that subjects primarily relied on visual feedback, when it was available, for control of accuracy in terminal positions.

The role of haptic information

In contrast, haptic information appeared to have profound effects on the temporal outputs, e.g., the times to complete tasks. No significant differences in the task completion time were found between visual conditions in Experiment 1, where the haptic conditions were the same. In Experiment 2, the controller size relative to the cursor size significantly affected temporal outputs, e.g., transportation time and orientation time. When subjects were asked to use only haptic feedback to complete the task in the disparity condition of Experiment 3, they did this as fast as in the condition of no disparity. This role of haptic information was also evident in Experiment 4: the contextual haptic constraints provided by the physical table dramatically changed the temporal outputs of object manipulation. Results of these four experiments suggest that humans make use of haptic information to facilitate the speed of object manipulation.

7.2.3 Implications for HCI design

One goal of this study is to provide implications for HCI research and design. Specific implications have been discussed for each experiment. Here we highlight some general results. There has been a question whether or not it is necessary to have a six DOF input device for virtual environments. Our results showed that, for 3D manipulation tasks, users were naturally able to transport and orient the object simultaneously. A six DOF input device may not only increase the realism of virtual environments, but also effectively improve human performance by allowing parallel manipulation in multiple dimensions. Because the transportation process is the critical path for multiple dimensional object manipulation, more attention should be paid to the transportation component of six DOF input devices. Augmented reality generally has graphic displays superimposed with physical objects. This research suggests advantages of augmented reality over other virtual environments. The consistency between visual and haptic displays provided by augmented environments significantly improves human performance. Augmented reality is an important option for virtual environments and should be implemented when feasible. Expensive haptic or force feedback control devices have been developed recently. However, such devices are primarily designed and intended for exploratory object manipulation rather than performatory object manipulation. Our results demonstrate that haptic feedback can improve human performance, especially speed control. For speed critical tasks, it is important to provide proper haptic feedback. Passive haptic constraints can serve as an economical and effective alternative to force feedback devices.

This study suggests an important consideration for user interface design. Graphic objects should be designed in relation to the properties of the physical objects. Designs in graphic space and control space can and should be co-developed. For example, information visualization systems should be designed with consideration of both graphic displays and control devices. Furthermore, the physical form of input devices, such as size and shape, should be considered as a factor in design. Human performance in multiple dimensional object manipulation involving object transportation and orientation is different from that in pointing. Object transportation and orientation demonstrate relative size effects among the controller, cursor and target, while pointing tasks follow Fitts' law (Graham and MacKenzie, 1996). However, since pointing can be considered as a special case of multiple dimensional object manipulation, it is possible to develop a generalized human engineering model for HCI that accommodates both pointing and multiple dimensional manipulation tasks. Furthermore, the structural model proposed in this thesis can be extended beyond object transportation and orientation. For example, this model can be used to investigate the structure of two-handed inputs in HCI. The controls of the left hand and right hand can be treated as two processes, and thus the concurrence and interdependence of the two processes can be examined.

Chapter 8 Conclusions, Limitations and Future Research

The current research on object manipulation in virtual environments is important for both understanding human control systems and human-computer interaction. Four experiments were conducted to investigate human performance on multiple dimensional object manipulation. Results from these experiments identify the structure of the object transportation and orientation processes. The results provide further understanding of the roles of haptic and visual information in human object manipulation in virtual environments. Findings from this research suggest insightful implications for user interface research and design. In addition, the structural model proposed in this dissertation provides a novel approach for research on multiple dimensional human-computer interaction.

Conclusions

This study makes contributions to understanding the mechanisms underlying human performance in object manipulation, and to providing implications for human-computer interaction design. The highlights of the results are summarized briefly as follows:

1. Object transportation and orientation processes have a parallel, interdependent structure. The object transportation process contains the object orientation process and therefore is the critical path for the task completion time.

2. The parallel, interdependent structure of object transportation and orientation is persistent over various visual and haptic feedback conditions, including domain constraints and contextual constraints of virtual environments.

3. Human performance on object manipulation is the result of the interplay between haptic and visual information presented in virtual environments. Humans achieve better performance when the haptic and visual information are consistent.

4. Contextual haptic and visual constraints play an important role in human performance on object manipulation. In particular, contextual haptic constraints have a significant impact on object manipulation speed.

5. HCI design should accommodate the parallel, interdependent structure of object transportation and orientation in the context of the control space as well as the display space. Graphic displays should be designed in relation to haptic displays. Graphic cues should be augmented with haptic cues. Display space should be superimposed with the control space whenever possible.

6. The structural model of decomposing object transportation and orientation processes and examining the concurrence and interdependence of the two processes can be applied to investigate other multiple human-computer interaction processes.

Limitations

There are limits to the inferences from the results of the current research. Our experimental results on human performance were obtained in the Virtual Hand Laboratory at Simon Fraser University. One important feature of the Virtual Hand Laboratory setup was that the display space was superimposed with the control space. Other virtual environments may have a separation between display and control spaces.

We used very simple graphic objects (cursor and target cubes) and physical objects (wooden and plastic cubes) for this study. In real world applications, complex graphic displays and various input devices may be employed. In addition, the task requirements for the four experiments can be different from real world applications. Care must be taken when generalizing and applying the results from this research to other computer systems and object manipulation tasks.

Future research

A few aspects of object manipulation in virtual environments are suggested by the current research:

1. To explore other combinations of object manipulation dimensions. This study focused on object translation on a horizontal plane and rotation around a vertical axis. Zhai and Milgram (1997) found an anisotropic feature of human performance in six-degree-of-freedom tracking tasks. For example, it will be interesting to see whether or not the structure of object transportation and orientation processes will change in a task requiring an object to be translated and rotated about the same axis.

2. To quantify human performance in multiple dimensional object manipulation. Fitts' law has been successfully used as an engineering model for HCI design (MacKenzie, 1992). The current research suggests that human performance for object transportation and orientation does not conform to Fitts' law. However, it might be possible to develop a generalized engineering model for multiple dimensional object manipulation with Fitts' law as a sub-model for pointing tasks.

3. To examine the microstructure of object transportation and orientation kinematics. The current research revealed the overall structure of object transportation and orientation in terms of the time courses. Future work can be pursued to study detailed kinematic landmarks such as peak velocities, and to establish the relationship between these landmarks for object transportation and orientation.

4. To investigate the contribution of different muscle groups to the object transportation and orientation processes. Data for this study were collected from the end-effector (controller) of object manipulation. It might be insightful to monitor how different muscle groups are coordinated to accomplish the multidimensional tasks. For example, EMG data from the hand and the arm can be collected and analyzed for object transportation and orientation tasks.

5. To study the coordination control among the head, eye and hand movements in virtual environments. It was observed that there was a small head movement during object manipulation. Further examination of when and how the head and eyes move in relation to the hand movement can provide insight into human motor control systems.

6. To test the hypothesis that the depth cues such as the checkerboard provided in virtual environments facilitate human object perception, but not object manipulation.
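Item 2 above envisions Fitts' law as a pointing sub-model within a generalized engineering model. As a reference point, the following is a minimal sketch of the Shannon formulation of Fitts' law commonly used in HCI (MacKenzie, 1992); the intercept and slope values are illustrative placeholders, not coefficients fitted to any data in this thesis:

```python
# Sketch of Fitts' law (Shannon formulation, MacKenzie 1992):
#   MT = a + b * log2(A / W + 1)
# where A is movement amplitude and W is target width.
# The coefficients a and b below are illustrative, not fitted values.
import math

def index_of_difficulty(amplitude_mm, width_mm):
    """Index of difficulty (bits) for a pointing movement."""
    return math.log2(amplitude_mm / width_mm + 1)

def movement_time_ms(amplitude_mm, width_mm, a=50.0, b=150.0):
    """Predicted movement time (ms) for an assumed intercept a and slope b."""
    return a + b * index_of_difficulty(amplitude_mm, width_mm)

# Amplitudes and widths echo distances and object sizes used in Experiment 2.
for amplitude in (100, 200):
    for width in (20, 50):
        print(f"A={amplitude} mm, W={width} mm, "
              f"ID={index_of_difficulty(amplitude, width):.2f} bits, "
              f"MT={movement_time_ms(amplitude, width):.0f} ms")
```

A generalized model of the kind proposed here would reduce to this pointing formulation when the orientation component is absent.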

References

Arbib, M.A. (1985). Schemas for the temporil control of behavior. Hltrma Newubiology, 4.63-72. Xrbib, MA.,1k1-all.T,n ~ Lyu~~s, d D.( 1985).Coordinated control programs for

movements of the hand. In A.W. Goodwin and I.Darian-Smoth (Eds.), Hund Flolcrion and the Neocorte.r, 1 1 1- 129, Berlin: Springer-Verlag. Arthur, K.W., Booth. K.S.and Ware. C. (1993). Evaluating 3D task performance for fish tank vinurl world. ACM Trut~suc~iotu 011l~@ntrufia~l Sysrenls, 1 1 (3), 239-265.

Atkeson, C.G. and Hollerbach, J.M. (1985). Kinematic features of unrestrained vertical arm movements. Journal of Neuroscience, 5(9-10), 2318-2330.

Balakrishnan, R. and Kurtenbach, G. (1999). Exploring bimanual camera control and object manipulation in 3D graphics interfaces. Proceedings of the Conference on Human Factors in Computing Systems CHI '99/ACM, 56-63.

Balakrishnan, R., Baudel, T., Kurtenbach, G. and Fitzmaurice, G. (1997). The Rockin' Mouse: Integral 3D manipulation on a plane. Proceedings of the Conference on Human Factors in Computing Systems CHI '97/ACM, 311-318.

Bernstein, A.J. (1966). Analysis of programs for parallel processing. IEEE Trans. Computers, 746-757.

Boritz, J. (1998). The Effectiveness of Three-Dimensional Interaction. Ph.D. Thesis, Dept. of Computer Science, University of British Columbia, Vancouver, B.C., Canada.

Cao, C.G.L., MacKenzie, C.L. and Payandeh, S. (1996). Task and motion analyses in endoscopic surgery. Proceedings of the ASME Dynamic Systems and Control Division, 58, 583-590.

Card, S.K., English, W.K. and Burr, B.J. (1978). Evaluation of mouse, rate-controlled isometric joystick, step keys and text keys for text selection on a CRT. Ergonomics, 21(8), 601-613.

Card, S.K., Moran, T.P. and Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum.

Crossman, E.R.F.W. and Goodeve, P.J. (1983). Feedback control of hand movement and Fitts' law. Quarterly Journal of Experimental Psychology, 35A, 251-278.

Cunningham, H.A. and Welch, R.B. (1994). Multiple concurrent visual-motor mappings: Implications for models of adaptation. Journal of Experimental Psychology: Human Perception and Performance, 20(5), 987-999.

Desmurget, M., Prablanc, C., Arzi, M., Rossetti, Y. and Paulignan, Y. (1996). Integrated control of hand transport and orientation during prehension movements. Experimental Brain Research, 110, 265-278.

Ellis, S.R., Tyler, M., Kim, W.S. and Stark, L. (1992). Three-dimensional tracking with misalignment between display and control axes. SAE Trans. J. Aerospace, 100(1), 985-989.

Fitts, P.M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47, 381-391.

Fitts, P.M. and Posner, M.I. (1967). Human Performance. Belmont, CA: Brooks/Cole.

Garner, W.R. (1974). The Processing of Information and Structure. Hillsdale, NJ: Lawrence Erlbaum.

Gibson, J.J. (1962). Observations on active touch. Psychological Review, 69, 477-491.

Gibson, J.J. (1966). The Senses Considered as Perceptual Systems. New York: Houghton Mifflin.

Goodale, M.A., Jakobson, L.S. and Servos, P. (1996). The visual pathways mediating perception and prehension. In Wing, A.M., Haggard, P. and Flanagan, J.R. (Eds.), Hand and Brain, 15-31. New York: Academic Press.

Graham, E.D. and MacKenzie, C.L. (1996). Physical versus virtual pointing. Proceedings of the Conference on Human Factors in Computing Systems CHI '96/ACM, 292-299.

Hinckley, K., Tullio, J., Pausch, R., Proffitt, D. and Kassell, N. (1997). Usability analysis of 3D rotation techniques. Proceedings of UIST '97, 1-10.

Hoffman, H.G. (1998). Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments. Proceedings of IEEE VRAIS, 59-63.

Hutchins, E.L., Hollan, J.D. and Norman, D.A. (1986). Direct manipulation interfaces. In D.A. Norman and S.W. Draper (Eds.), User Centered System Design, 87-124. Hillsdale, NJ: Lawrence Erlbaum.

Hwang, K. (1993). Advanced Computer Architecture. New York: McGraw-Hill.

Jacob, R.J.K., Sibert, L.E., McFarlane, D.C. and Mullen, M.P. Jr. (1994). Integrality and separability of input devices. ACM Transactions on Computer-Human Interaction, 1(1), 3-26.

Jakobson, L.S. and Goodale, M.A. (1991). Factors affecting higher-order movement planning: A kinematic analysis of human prehension. Experimental Brain Research, 86, 199-208.

Jeannerod, M. (1981). Intersegmental coordination during reaching at natural visual objects. In J. Long and A. Baddeley (Eds.), Attention and Performance IX, 153-169. Hillsdale, NJ: Lawrence Erlbaum.

Jeannerod, M. (1984). The timing of natural prehension movements. Journal of Motor Behavior, 16, 235-254.

Jeannerod, M. (1986). The formation of finger grip during prehension: A cortically mediated visuomotor pattern. Behavioural Brain Research, 19(2), 99-116.

Jeannerod, M. (1988). The Neural and Behavioural Organization of Goal-Directed Movements. Oxford: Oxford University Press.

Kabbash, P. and Buxton, W. (1995). The "Prince" technique: Fitts' law and selection using area cursors. Proceedings of the Conference on Human Factors in Computing Systems CHI '95/ACM, 273-279.

Kaczmarek, K.A. and Bach-y-Rita, P. (1995). Tactile displays. In Barfield, W. and Furness, T.A. (Eds.), Virtual Environments and Advanced Interface Design, 351-413. New York: Oxford University Press.

Karat, J., McDonald, J. and Anderson, M. (1984). A comparison of selection techniques: Touch panel, mouse and keyboard. Proceedings of INTERACT '84, 149-153.

Kawato, M., Uno, Y., Isobe, M. and Suzuki, R. (1987). A hierarchical model for voluntary movement and its application to robotics. Proceedings of the First IEEE International Conference on Neural Networks, 573-582.

Keele, S.W. and Posner, M.I. (1968). Processing of visual feedback in rapid movements. Journal of Experimental Psychology, 77, 155-158.

Lindeman, R.W., Sibert, J.L. and Hahn, J.K. (1999). Towards usable VR: An empirical study of user interfaces for immersive virtual environments. Proceedings of the Conference on Human Factors in Computing Systems CHI '99/ACM, 64-71. Pittsburgh, PA.

Loomis, J.M. and Lederman, S.J. (1986). Tactual perception. In Handbook of Human Perception and Performance, 1-41. New York: Wiley.

MacKenzie, C.L. (1993). Making contact: Target surfaces and pointing implements for 3D kinematics of humans performing a Fitts' task. Society for Neuroscience Abstracts, 18, 515.

MacKenzie, C.L. and Iberall, T. (1994). The Grasping Hand. Amsterdam: North-Holland.

MacKenzie, C.L. and Marteniuk, R.G. (1985). Motor skill: Feedback, knowledge, and structural issues. Canadian Journal of Psychology, 39(2), 313-337.

MacKenzie, I.S. (1992). Fitts' law as a research and design tool in human-computer interaction. Human-Computer Interaction, 7, 91-139.

MacKenzie, I.S., Soukoreff, R.W. and Pal, C. (1997). A two-ball mouse affords three degrees of freedom. Extended Abstracts of the Conference on Human Factors in Computing Systems CHI '97/ACM, 303-304.

Marteniuk, R.G., MacKenzie, C.L. and Leavitt, J.L. (1990). The inadequacies of a straight physical account of motor control. In The Natural-Physical Approach to Movement Control, 95-115. Amsterdam: Free University Press.

Mithal, A.K. and Douglas, S.A. (1996). Differences in movement microstructure of the mouse and the finger-controlled isometric joystick. Proceedings of the Conference on Human Factors in Computing Systems CHI '96/ACM. New York: ACM.

Norman, D.A. (1988). The Psychology of Everyday Things. New York: Basic Books.

Norman, D.A. and Draper, S.W. (1986). User Centered System Design. Hillsdale, NJ: Lawrence Erlbaum.

Palastanga, N., Field, D. and Soames, R. (1994). Anatomy and Human Movement: Structure and Function. Oxford: Butterworth-Heinemann.

Paulignan, Y. and Jeannerod, M. (1996). Prehension movements: The visuomotor channels hypothesis revisited. In Wing, A.M., Haggard, P. and Flanagan, J.R. (Eds.), Hand and Brain, 265-282. New York: Academic Press.

Paulignan, Y., Jeannerod, M., MacKenzie, C.L. and Marteniuk, R.G. (1991). Selective perturbation of visual input during prehension movements: 2. The effects of changing object size. Experimental Brain Research, 87, 407-420.

Posner, M.I., Nissen, M.J. and Klein, R. (1976). Visual dominance: An information-processing account of its origins and significance. Psychological Review, 83, 157-171.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction. Amsterdam: Elsevier North-Holland.

Rasmussen, J. (1990). Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. In Michael Venturino (Ed.), Selected Readings in Human Factors, 61-70. Human Factors Society.

Robertson, G., Czerwinski, M. and Larson, K. (1998). Data Mountain: Using spatial memory for document management. Proceedings of UIST '98/ACM, 153-161.

Rosenbaum, D.A., Vaughan, J., Barnes, H.J. and Jorgensen, M.J. (1992). Time course of movement planning: Selection of handgrips for object manipulation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(5), 1058-1073.

Rothwell, J.C., Traub, M.M., Day, B.L., Obeso, J.A., Thomas, P.K. and Marsden, C.D. (1982). Manual motor performance in a deafferented man. Brain, 105, 515-542.

Schmidt, R.A. (1988). Motor Control and Learning: A Behavioral Emphasis. Champaign, IL: Human Kinetics.

Schmidt, R.A., Zelaznik, H.N., Hawkins, B., Frank, J.S. and Quinn, J.T. (1979). Motor-output variability: A theory for the accuracy of rapid motor acts. Psychological Review, 86, 415-451.

Servos, P., Goodale, M.A. and Jakobson, L.S. (1992). The role of binocular vision in prehension: A kinematic analysis. Vision Research, 32(8), 1513-1521.

Shneiderman, B. (1983). Direct manipulation: A step beyond programming languages. IEEE Computer, 16(8), 57-69.

Shneiderman, B. (1992). Designing the User Interface. New York: Addison-Wesley.

Sivak, B. and MacKenzie, C.L. (1992). The contributions of peripheral vision and central vision to prehension. In Vision and Motor Control, 233-259. Amsterdam: Elsevier Science Publishers.

Soechting, J.F. and Flanders, M. (1993). Parallel, interdependent channels for location and orientation in sensorimotor transformations for reaching and grasping. Journal of Neurophysiology, 70(3), 1137-1150.

Soechting, J.F., Tong, D.C. and Flanders, M. (1996). Frames of reference in sensorimotor integration: Position sense of the arm and hand. In Wing, A.M., Haggard, P. and Flanagan, J.R. (Eds.), Hand and Brain, 151-168. New York: Academic Press.

Srinivasan, M.A. (1994). Haptic interfaces. In N.I. Durlach and A.S. Mavor (Eds.), Virtual Reality: Scientific and Technological Challenges, 161-187. National Academy Press.

Summers, V.A., Booth, K.S., Calvert, T., Graham, E. and MacKenzie, C.L. (1999). Calibration for augmented reality experimental test beds. ACM Symposium on Interactive 3D Graphics, 155-162.

Wallace, S.A. and Weeks, D.L. (1988). Temporal constraints in the control of prehensile movement. Journal of Motor Behavior, 20(2), 81-105.

Wang, Y., Das, B. and Sengupta, A.K. (1999). Normal horizontal working area: The concept of inner boundary. Ergonomics, 42(4), 638-646.

Wang, Y. and MacKenzie, C.L. (1999a). Object manipulation in virtual environments: Relative size matters. Proceedings of the Conference on Human Factors in Computing Systems CHI '99/ACM, 48-55.

Wang, Y. and MacKenzie, C.L. (1999b). Effects of orientation disparity between haptic and graphic displays of objects in virtual environments. Proceedings of INTERACT '99.

Wang, Y., MacKenzie, C.L. and Summers, V. (1997). Object manipulation in virtual environments: Human bias, consistency and individual differences. Extended Abstracts of the Conference on Human Factors in Computing Systems CHI '97/ACM, 349-350.

Wang, Y., MacKenzie, C.L., Summers, V. and Booth, K.S. (1998). The structure of object transportation and orientation in human-computer interaction. Proceedings of the Conference on Human Factors in Computing Systems CHI '98/ACM, 312-319.

Ware, C. (1990). Using hand position for virtual object placement. The Visual Computer, 6, 245-253.

Ware, C. (1998). Real handles, virtual images. Summary of the Conference on Human Factors in Computing Systems CHI '98/ACM, 235-236.

Welford, A.T. (1976). Skilled Performance. Glenview, IL: Scott, Foresman and Company.

Wickens, C.D. (1999). Engineering Psychology and Human Performance. Columbus, OH: Harper Collins.

Wing, A.M., Turton, A. and Fraser, C. (1986). Grasp size and accuracy of approach in reaching. Journal of Motor Behavior, 18(3), 245-260.

Woodworth, R.S. (1899). On the accuracy of voluntary movements. Psychological Review Monograph (Suppl. 3), 1-114.

Zhai, S. (1995). Human Performance in Six Degree of Freedom Input Control. Ph.D. Thesis, Dept. of Computer Science, University of Toronto, Toronto, Ontario, Canada.

Zhai, S. and Milgram, P. (1993). Human performance evaluation of manipulation schemes in virtual environments. Proceedings of the First IEEE Virtual Reality Annual International Symposium, 155-161.

Zhai, S. and Milgram, P. (1997). Anisotropic human performance in six degree-of-freedom tracking: An evaluation of three-dimensional display and control interfaces. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 27, 518-528.

Zhai, S., Milgram, P. and Buxton, W. (1996). The influence of muscle groups on performance of multiple degree-of-freedom input. Proceedings of the Conference on Human Factors in Computing Systems CHI '96/ACM, 308-315.
