Projektpraktikum: Golf Swing Analysis with Event Based Vision

Final Report (Abschlussbericht)
by Bo Wang (3635592) and Kai Stiens (3635886)

Lehrstuhl für Steuerungs- und Regelungstechnik
Neuroscientific System Theory
Technische Universität München

Supervisor: Dipl.-Math. David Weikersdorfer
Submission: 18.06.2013


Contents

1 Introduction
  1.1 DVS Sensor
  1.2 Goals

2 DVS camera setting and recording data

3 Data Processing
  3.1 What does the data look like?
  3.2 Algorithm
    3.2.1 Event Classification
    3.2.2 Ball Trajectory Analysis
    3.2.3 Result of the Algorithm

4 Golf ball trajectory calculation and visualization
  4.1 Numerical calculation of the golf ball driving trajectory
    4.1.1 Air resistance
    4.1.2 Magnus force
    4.1.3 Assumptions and calculation equations
  4.2 Visualizations of the hit and the golf ball movement

5 Summary

List of Figures

Bibliography


Chapter 1 Introduction

Golf was once a royal sport; nowadays it becomes more and more popular. Everyone can play golf: all you need is a golf club, a golf ball and some nicely cut grass. You swing the club, the club head hits the ball, the ball flies away, and then the question arises: was that a good shot? Besides accuracy, players are eager to find a way to drive the golf ball as far as possible (fig. 1.1).

Figure 1.1: Record long drive [m] (see [KTTH])

For a long drive, the player needs good golf technique, and to improve this technique, an analysis is required. In modern golf swing analysis, high-speed cameras, motion sensors integrated in the club head and even radar are used, but these are normally expensive and inconvenient to carry. So how can a golf swing be analyzed with modest but adequate equipment?

1.1 DVS Sensor

The Dynamic Vision Sensor (DVS), or asynchronous temporal contrast silicon retina, works like a human retina (see [IoN] for further information). It transmits only the local pixel-level changes caused by movement in a scene, at the time they occur. Unlike conventional vision sensors, which need enormous memory and computing power to record the world, the DVS only needs to record the change events. This results in a stream of events with microsecond time resolution.

Figure 1.2: Dynamic Vision Sensor

Figure 1.3: Time resolution comparison: conventional camera vs. dynamic vision sensor

Nowadays it is widely used in fast robotics, motion analysis, particle tracking, microscopy, etc.


Array size          128 × 128
Power consumption   chip: 23 mW at 3.3 V
Response latency    15 µs
Timing precision    1 µs

Table 1.1: Specifications of the DVS

1.2 Goals

With the dynamic vision sensor technology, we will try to analyze a golf swing directly after it happens. We will then simulate the flight trajectory of the golf ball and show basic information such as the drive length, the launch angle and the maximal flight height. With this information, the golf player should be able to try different swinging techniques, hit the ball from different stances and thus improve his skill on his own.


Chapter 2 DVS camera setting and recording data

While recording data on a golf course with professional golf players, the following errors occurred in the recorded data. As shown in figure 2.1, in the left part of the figure the golf club is still clearly visible; then the errors occur (middle and right parts).

Figure 2.1: Camera error

This happens because so many events occur at that moment that the sensor is unable to gather all of them. To find the limits of the sensor's performance, an experiment was designed. Figure 2.2 shows a remote-controlled helicopter with variable rotation speed; an extra black-and-white pattern is used to reproduce the error situation easily. The data is recorded from above while the rotation speed increases until the event rate limit of the DVS is reached (figure 2.3). It appears that the maximal performance of the DVS is around 2 ME/s. That means, in order to get useful data, we should:

1. Avoid recording unnecessary information such as the body movement of the golf player. The sensor is placed behind the player with focus just on the ball and the club, as shown in figure 2.4.


Figure 2.2: Camera performance experiment

Figure 2.3: Camera performance experiment: result

2. Change the sensor sensitivity to get fewer events during the movement. A suitable configuration (adjusted event threshold) for the sensor is shown in figure 2.5. It produces an event rate below 600 kE/s at a golf ball driving speed of around 180 km/h.

3. Find a recording situation which produces a lower event rate. Two data-gathering situations were considered: indoor and outdoor recording. The results show that, at a golf ball driving velocity of 160 km/h, the indoor situation produces an event rate of 760 kE/s, while the outdoor situation with the same field of view and velocity produces an event rate of only 360 kE/s.


Figure 2.4: Camera position

Figure 2.5: eDVS setting


Chapter 3 Data Processing

After discussing the challenges of data acquisition in the last chapter, we will now look at the possibilities to process the data and extract useful information. We will first inspect the captured data visually to get a first impression of what can be extracted easily and what cannot (3.1). In the second section (3.2) of this chapter we explain what information we were able to extract and how the algorithm works.

3.1 What does the data look like?

Unlike normal video data, the captured data is not directly interpretable by humans, because it consists of an asynchronous stream of events that are stored as the position of the activated pixel in X and Y coordinates, the polarity and a time stamp. The polarity determines whether a change from dark to bright (polarity "on") or from bright to dark (polarity "off") has taken place. This data can be displayed as images or videos by taking a set of events and colouring the corresponding pixels in an image according to the polarity. Hereafter, events with "on" polarity are coloured green while "off" events are coloured red. The events can be chosen either by taking a constant number of events or by taking all events contained in a fixed period of time. The resulting pictures are called slices. Figure 3.1 shows a slice at a time when the club is already in the field of view but has not yet hit the ball. As we can see, the shaft of the club is clearly visible as a line, but it is not possible to recognize the head of the club. We can also see that the club mainly causes "off" events; this is because the club was mostly dark coloured, which can change when other clubs are used. Figure 3.2 shows a slice after the club has hit the ball, so in this picture the ball is visible as well. The front of the ball causes only "on" events, while the back of the ball does not cause a significant number of events. Also, the ball does not cause events in such a specific shape as the club does.
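The slicing described above can be sketched in a few lines. The event tuple layout (x, y, polarity, timestamp) is a hypothetical in-memory representation for illustration, not the sensor's wire format:

```python
import numpy as np

def render_slice(events, size=128):
    """Render a slice: +1 for "on" events (drawn green), -1 for "off" (red).

    `events` is assumed to be a sequence of (x, y, polarity, timestamp)
    tuples with polarity in {0, 1} -- an illustrative in-memory layout.
    """
    img = np.zeros((size, size), dtype=np.int8)
    for x, y, pol, _t in events:
        img[y, x] = 1 if pol else -1  # later events overwrite earlier ones
    return img

def slices_by_count(events, n):
    """Yield slices built from a constant number of events each."""
    for i in range(0, len(events), n):
        yield render_slice(events[i:i + n])
```

Slicing by a fixed time window instead would simply group events by their timestamp before calling `render_slice`.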


Figure 3.1: Slice of events before the club hits the ball

In both slices some noise is visible, but not in an amount that impairs our interpretation of the images as humans.

Figure 3.2: Slice of events after the club hit the ball

3.2 Algorithm

This section explains the algorithm developed in this project step by step. The first step in extracting useful information from the captured data is to classify the events: we need to know what the events were caused by before any further analysis. As we have seen, almost all events in our data are caused by either the club or the ball, so we will try to group the events into at least two groups: those caused by the club and those caused by the ball. Unfortunately we cannot be sure of the colour of the club, so we cannot make use of the polarity information. Also, the ball only causes events on one side, so we cannot extract any motion information from the polarity either. Therefore we pay no further attention to the polarity of the events. The following subsection describes how the algorithm classifies the events.

3.2.1 Event Classification

A good starting point is the very distinctive shape of the club shaft, because it is a straight line that is clearly visible. For simplicity and efficiency, the coordinates in image space are centered before the data is analyzed; from now on, the range of both x and y is [−64, 63] instead of [0, 127]. A very good tool to recognize lines is the Hough transform (see [Har09]). The Hough transform maps a point with coordinates x and y in image space into a parameter space of angles α and corresponding distances d that define all possible straight lines intersecting the given point. The parameter equation is:

d(α) = x · cos(α) + y · sin(α)    (3.1)

Every single point results in a sinusoid in Hough space, and a set of points on the same straight line results in a set of sinusoids that intersect in the point (α, d) specifying the underlying line. Figure 3.4 shows an example of sinusoids in the Hough space. To find this intersection in a practicable manner, the Hough space is divided into a discrete grid called the voting matrix V. Every time a sinusoid crosses a cell, the value of the cell is increased. The cell with the highest value corresponds to the set of parameters where most of the sinusoids intersect, and is therefore the most probable estimate of the underlying straight line. The grid-based solution increases robustness and decreases computational complexity. It is also possible to constrain the range of the parameters to search only for relevant solutions of the estimation problem. In our case the angle α_club of the club shaft is constrained to ±20° with respect to a vertical line; usually α_club does not exceed ±10°. Our estimation of the club shaft should also reflect the continuity of both parameters α_club and d_club of the real golf club. To take this into account, we do not compute the Hough transform independently for a set of slices but as an incremental process: for every new event p at time step k we calculate the Hough transform and the corresponding votes P, and add them to the voting matrix V after discounting all old votes with a decay factor γ ∈ (0, 1):

V_k = P + γ · V_(k−1)    (3.2)
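The incremental voting of equations 3.1 and 3.2 can be sketched as follows. The grid resolution, the decay factor and the angle convention (α measured as the tilt of the line's normal from the horizontal, so a near-vertical shaft has α near 0) are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def make_hough_voter(n_angles=41, n_dist=128, angle_limit_deg=20.0,
                     d_max=90.0, gamma=0.999):
    """Incremental Hough voting with decayed votes (equation 3.2)."""
    alphas = np.deg2rad(np.linspace(-angle_limit_deg, angle_limit_deg, n_angles))
    d_bins = np.linspace(-d_max, d_max, n_dist)
    V = np.zeros((n_angles, n_dist))

    def vote(x, y):
        """Add one event at centered coordinates (x, y); return best line."""
        nonlocal V
        V *= gamma                                   # discount old votes
        d = x * np.cos(alphas) + y * np.sin(alphas)  # equation 3.1
        idx = np.clip(np.searchsorted(d_bins, d), 0, n_dist - 1)
        V[np.arange(n_angles), idx] += 1.0           # one vote per angle row
        i, j = np.unravel_index(np.argmax(V), V.shape)
        return np.rad2deg(alphas[i]), d_bins[j]      # current best (alpha, d)

    return vote
```

Feeding events that lie on a near-vertical line quickly concentrates the votes in one cell, while old votes fade away as the club moves.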


When our estimation of the club shaft is good enough (i.e. the maximum of the voting matrix exceeds a defined limit), we ignore outliers that lie too far away from our current estimation. We classify all events that match our estimation (that are not outliers) as caused by the club. Figure 3.3 shows the estimation of α_club for an example data set. It shows the raw estimation as points for every new sample, and also a postprocessed (low-pass filtered) version including an interval of two standard deviations in both directions.


Figure 3.3: Estimation of the angle of the club shaft

As we saw in 3.1, the club and the ball cause nearly all events, so all outliers with respect to our estimation of the club are most probably caused by the ball. We can also use the constraint that events caused by the ball must be in front of the club (in our case, to its left) to further reduce classification errors. So we classify all outliers left of our estimation as probably caused by the ball ("ball event candidates"). The blue crosses in figure 3.7 show all events in the example data set classified that way. After the classification, the Hough transform can also be used to extract the angle and width of the ball's trajectory. This is the topic of the next subsection.

3.2.2 Ball Trajectory Analysis

To analyze the ball trajectory, all points classified as ball candidates are transformed into Hough space, accumulated in a voting matrix and evaluated once for the whole data set. So, unlike in equation 3.2, old values are not discounted but simply summed up. Figure 3.4 shows an example of a voting matrix of ball candidate events. (Again the range of the angle is limited to get only relevant results; it is limited to 0°–90° with respect to the ground.)

Figure 3.4: Event candidates for the ball in the Hough plane

In this case the basic geometric problem we want to solve differs a bit. We do not want to find one single straight line but the biggest set of collinear lines, so we do not simply search for the maximum within the voting matrix. Instead, we mark a certain range of values (for example, values bigger than 0.8 times the maximum) as belonging to the maximum and search for the α with the most cells that match this criterion. This directly gives us the angle α_traj of the trajectory of the ball. An evaluation of the width of the maximum along d for the found α_traj gives us the width of the trajectory, namely the diameter of the ball in the image. This diameter, given in pixels, can be used to calculate the scale factor pixels/mm if it is unknown, because the real size of the ball is known (42.7 mm). The central value of the maximum is the estimated center of the trajectory, d_traj.

The next step is to extract the speed of the ball. First, we record a time stamp matrix T. This matrix contains, for every pixel in image space (x, y), the time stamp when its activation was first caused by an event classified as a ball event candidate. From this matrix we collect all events that lie in the trajectory of the ball. To do this we calculate d with equation 3.1 for every pixel with a time stamp, using the extracted trajectory angle α_traj, and decide whether d lies in the range we calculated for the trajectory. If this is the case we calculate the position r of the pixel along the trajectory:

r = (d · cos(α) − x) / sin(α)    (3.3)
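The band-finding step and the projection of equation 3.3 can be sketched like this, assuming a cumulative voting matrix V with one row per angle and one column per distance bin; the 0.8 threshold comes from the example above, everything else (helper names, bin layout) is illustrative:

```python
import numpy as np

def trajectory_from_votes(V, alphas, d_bins, frac=0.8):
    """Find the trajectory band in a cumulative voting matrix V.

    All cells above `frac` times the global maximum are marked; the angle
    row containing most marked cells gives alpha_traj, the spread of its
    marked cells along d gives the width (ball diameter in pixels) and
    their center gives d_traj.
    """
    mask = V >= frac * V.max()
    row = int(np.argmax(mask.sum(axis=1)))      # angle with most marked cells
    cols = np.flatnonzero(mask[row])
    width = d_bins[cols[-1]] - d_bins[cols[0]]
    d_traj = 0.5 * (d_bins[cols[0]] + d_bins[cols[-1]])
    return alphas[row], d_traj, width

def position_along_trajectory(x, y, alpha, d):
    """Position r of a pixel (x, y) along the line (alpha, d), equation 3.3."""
    return (d * np.cos(alpha) - x) / np.sin(alpha)
```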

Figure 3.5 shows a plot with the time stamp on the x-axis and the position r along the trajectory on the y-axis. We can observe a clearly visible linear correlation corresponding to a movement with constant speed. The speed of the movement is given by the slope of the underlying line, which can be extracted by linear regression.


In this case a robust regression method (see [HW77]) must be used, because time stamps caused by noise are not part of the linear process we model and can show significant deviations that impair the result of standard linear regression methods such as least squares. The red line in figure 3.5 shows the result of a robust regression. The slope of the regression line directly gives us the speed of the ball in pixels/µs, which can easily be converted to km/h, either by calibration of the field of view or by estimation of the scale factor discussed above.
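A minimal stand-in for the robust fit is iteratively reweighted least squares with Huber-style weights, in the spirit of [HW77]; the tuning constant, scale estimate and iteration count below are illustrative choices:

```python
import numpy as np

def robust_slope(t, r, iters=20, eps=1e-8):
    """Slope and intercept of r over t, robust against outliers.

    Iteratively reweighted least squares: residuals far above the median
    residual get small weights, so noisy time stamps barely influence
    the final line.
    """
    t = np.asarray(t, dtype=float)
    r = np.asarray(r, dtype=float)
    w = np.ones_like(t)
    slope = intercept = 0.0
    for _ in range(iters):
        A = np.vstack([t, np.ones_like(t)]).T * w[:, None]
        slope, intercept = np.linalg.lstsq(A, r * w, rcond=None)[0]
        resid = np.abs(r - (slope * t + intercept))
        scale = np.median(resid) + eps
        w = np.minimum(1.0, 1.345 * scale / (resid + eps))  # Huber-style weights
    return slope, intercept

# slope is in pixels/us; with a scale factor s in mm/pixel:
# v_kmh = slope * s * 3600   (mm/us -> km/h)
```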


Figure 3.5: Interpretation of the speed of the ball as the slope of a line

We also want to extract the point where the ball caused the first events of the trajectory, to determine where the ball was before the hit and the point in time when the club hit the ball. The trajectory of the ball causes a very stable density of events along r, so we integrate the number of events along r. We do this simply by sorting the values of r in ascending order and plotting them over an ascending index starting from 1 with a step size of 1. As we see in figure 3.6, the trajectory causes a straight line that can again be extracted by robust regression. In the next step we search for the first n = diameter points that lie on the line. This ensures that we get the point where the complete front of the ball has caused events for the first time. For these points we save the corresponding mean of the r values, r_hit, and the time stamp t_hit. We can subtract diameter/2 from r_hit to get the center of the ball and convert it back to x- and y-coordinates with equations 3.3 and 3.1. The values of α and d are given by the estimation of the trajectory, α_traj and d_traj. With t_hit we can find the estimated position (α_club, d_club) of the club at the moment of the hit. To sum up all results of the algorithm, the last subsection shows a visualization that was created automatically by the algorithm.
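The hit-point extraction can be sketched as follows; a plain least-squares fit replaces the robust regression for brevity, the "on the line" tolerance is an illustrative choice, and taking the mean time stamp of the first points is one reasonable reading of the procedure above:

```python
import numpy as np

def first_full_front(r_values, t_values, diameter):
    """Estimate r_hit and t_hit, where the ball front first caused events.

    Sorts positions r ascending, fits the straight part of the cumulative
    event count, then averages the first `diameter` points on that line.
    """
    order = np.argsort(r_values)
    r = np.asarray(r_values, dtype=float)[order]
    t = np.asarray(t_values, dtype=float)[order]
    idx = np.arange(1, len(r) + 1)             # cumulative event count
    slope, icpt = np.polyfit(r, idx, 1)
    resid = np.abs(idx - (slope * r + icpt))
    on_line = resid < 0.1 * len(r)             # illustrative tolerance
    first = np.flatnonzero(on_line)[:diameter]
    r_hit = r[first].mean() - diameter / 2.0   # shift back to ball center
    t_hit = t[first].mean()
    return r_hit, t_hit
```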



Figure 3.6: Integral of the number of events along r

3.2.3 Result of the Algorithm

Figure 3.7 shows the extracted data visualized in one picture. The blue crosses show the real events classified as ball candidate events. The blue lines show the upper, center and lower estimates of the ball trajectory. The estimated position of the ball at the moment of the hit is drawn as a black circle. The red crosses show a set of real events that matched our estimation of the club at the moment of the hit (red line).


Figure 3.7: Data extracted by the algorithm visualized (speed: 174.5 km/h)


Chapter 4 Golf ball trajectory calculation and visualization

This chapter is split into two sections. The first explains the mathematical models used to compute the whole trajectory of the golf ball from the values extracted from the captured data by the algorithm. The second section presents the current visualization of the results for the user.

4.1 Numerical calculation of the golf ball driving trajectory

To compute the whole trajectory of the ball, several phenomena must be taken into account. They are explained in the next two subsections. After that, the third subsection gives an overview of the whole set of equations that is solved to compute the trajectory.

4.1.1 Air resistance

The force exerted on an object by the air is called air resistance or drag force F_D:

F_D = (1/2) · ρ · v² · C_D · A    (4.1)

where
  ρ is the density of the air,
  v is the speed of the object relative to the air,
  A is the cross-sectional area,
  C_D is the drag coefficient, a dimensionless number.


The drag coefficient is determined by the shape of the moving object and the speed of the movement. The Reynolds number influences C_D and can be written as:

Re = (ρ · v · d) / η,  with η = ϑ · ρ    (4.2)

where
  ρ is the density of the fluid,
  v is the mean velocity of the object relative to the fluid,
  d is a characteristic linear dimension,
  η is the dynamic viscosity of the fluid,
  ϑ is the kinematic viscosity of the fluid.

The drag coefficient changes along the blue curve "golf ball with dimple (present)" shown in figure 4.1.

Figure 4.1: Reynolds number vs drag coefficient (see [KC])


The drag coefficient of the golf ball remains constant at C_D = 0.23 for Re ≥ 0.9 × 10⁵, which under the atmospheric conditions of table 4.1 corresponds to a velocity v ≥ 32 m/s (116 km/h).

Air pressure           101.325   [kPa]
Temperature            20        [°C]
Dynamic viscosity      18.232    [10⁻⁶ Pa·s]
Kinematic viscosity    15.33     [10⁻⁶ m²/s]

Table 4.1: Atmosphere condition

Table 4.1: Atmosphere condition Therefore, the values of the drag coefficient vs. velocity can be determined (table 4.2). Re v CD

≥ 0.9e+5 ≥ 116km/h 0.23

0.8e+5 103km/h 0.28

0.7e+5 90km/h 0.31

0.65e+5 84km/h 0.35

0.6e+5 77km/h 0.4

0.55e+5 71km/h 0.45

≤ 0.5e+5 ≤ 64km/h 0.50

Table 4.2: Drag coefficient vs. velocity

The air resistance must be considered in a three-dimensional way in the calculation (equation 4.3):

(Fx, Fz, Fy)ᵀ = (1/2) · ρ · C_D · A · (vx², vz², vy²)ᵀ    (4.3)
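With the values of table 4.2 the drag force can be computed for a given velocity vector. The sketch below uses the standard vector form −(1/2)·ρ·C_D·A·|v|·v⃗ (drag opposing the velocity) together with a piecewise-constant C_D lookup; interpolating the table would be equally defensible:

```python
import numpy as np

# C_D vs. speed from table 4.2 (upper speed limit in km/h, C_D)
_CD_TABLE = [(64, 0.50), (71, 0.45), (77, 0.40), (84, 0.35),
             (90, 0.31), (103, 0.28), (116, 0.23)]

def drag_coefficient(v_kmh):
    """Piecewise-constant C_D lookup according to table 4.2."""
    for v_lim, cd in _CD_TABLE:
        if v_kmh <= v_lim:
            return cd
    return 0.23          # constant above Re ~ 0.9e5

def drag_force(v_vec, rho=1.204, A=1.43e-3):
    """Drag force vector [N] opposing the velocity v_vec [m/s]."""
    v = np.linalg.norm(v_vec)
    if v == 0.0:
        return np.zeros(3)
    cd = drag_coefficient(v * 3.6)           # m/s -> km/h
    return -0.5 * rho * cd * A * v * np.asarray(v_vec, dtype=float)
```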

4.1.2 Magnus force

The Magnus force arises from a pressure difference on the surface of a moving object. For a golf ball, due to its spin, there is a pressure difference between different parts of the ball, so the Magnus force should be considered in the simulation (for further information see [unk05]). The Magnus force can be described by the following equations:

F_Magnus = S₀ · (ω⃗ × v⃗)    (4.4)

S₀ = (1/2) · C_M · ρ · A    (4.5)

where
  S₀ is a numerical coefficient that depends on the size of the ball and the density of the air,
  C_M is the Magnus coefficient,
  ω⃗ is the angular velocity of the ball.


Figure 4.2: Magnus force (see [KTTH])

Because golf balls have no high-contrast pattern on them, the spin rate cannot be determined from the data. In SI units, for a typical golf drive we assume S₀ · ω / m ≈ 0.25, where ω = |ω⃗| (see [unk05]). Since the Magnus force has a significant influence, we assume a backspin of ωy = 110 rad/s as a typical value for a golf drive. For a three-dimensional calculation the cross product expands to:

ω⃗ × v⃗ = (ωz·vy − ωy·vz)·x̂ + (ωy·vx − ωx·vy)·ẑ + (ωx·vz − ωz·vx)·ŷ    (4.6)

so the components of the Magnus force are:

F_Mx = S₀ · ω · (ω̂z·vy − ω̂y·vz)
F_Mz = S₀ · ω · (ω̂y·vx − ω̂x·vy)
F_My = S₀ · ω · (ω̂x·vz − ω̂z·vx)    (4.7)
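The component equations above are just the cross product written out; with standard (x, y, z) axis ordering the computation reduces to a single call (the axis labelling here is illustrative and differs from the report's (x, z, y) ordering):

```python
import numpy as np

def magnus_force(omega_vec, v_vec, S0=2e-5):
    """Magnus force F = S0 * (omega x v), equations 4.4-4.7.

    `omega_vec` is the angular velocity [rad/s], `v_vec` the velocity
    [m/s]; S0 is the numerical coefficient from table 4.3.
    """
    return S0 * np.cross(omega_vec, v_vec)

# With standard (x, y, z) axes, backspin about +z on a ball moving
# along +x produces lift along +y.
```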

4.1.3 Assumptions and calculation equations

The following table contains an overview of our assumptions:

quantity                symbol   value          unit
gravity                 g        9.81           [m/s²]
ball mass               m        45.93          [g]
cross-sectional area    A        1.43 × 10⁻³    [m²]
air density             ρ        1.204          [kg/m³]
numerical coefficient   S₀       0.00002        −

Table 4.3: Assumptions

With the assumptions from table 4.3, the net force on the driven golf ball can be calculated with equation 4.8.

ΣFx = m·x″;  x″ = dvx/dt = −C_D · (ρA / 2m) · v · vx + (S₀ω/m) · (ω̂z·vy − ω̂y·vz)
ΣFz = m·z″;  z″ = dvz/dt = −C_D · (ρA / 2m) · v · vz + (S₀ω/m) · (ω̂y·vx − ω̂x·vy)
ΣFy = m·y″;  y″ = dvy/dt = −C_D · (ρA / 2m) · v · vy + (S₀ω/m) · (ω̂x·vz − ω̂z·vx) − g    (4.8)

4.2 Visualizations of the hit and the golf ball movement

The following pictures show an example of a golf shot with a velocity of 177 km/h, a driving angle of 9.5° and an angle of 12° between the club and a vertical line. The velocity is visualized by the red arrow in figure 4.3. The simulated flight trajectory of the golf ball is shown in the last two figures.
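Simulated trajectories like these can be reproduced by numerically integrating equation 4.8. A minimal explicit-Euler sketch, assuming a constant C_D = 0.23 and the typical-drive value S₀ω/m ≈ 0.25 from section 4.1.2 (the step size and the simple integrator are illustrative choices):

```python
import numpy as np

G = 9.81                     # gravity [m/s^2]
M = 45.93e-3                 # ball mass [kg]
RHO, AREA = 1.204, 1.43e-3   # air density [kg/m^3], cross section [m^2]
CD = 0.23                    # constant drag coefficient, for simplicity
SW_OVER_M = 0.25             # S0*omega/m ~ 0.25 1/s (typical drive)

def accel(v, omega_hat):
    """Acceleration from drag, Magnus force and gravity (equation 4.8).
    The vertical axis is y (index 1); omega_hat is the spin axis unit vector."""
    speed = np.linalg.norm(v)
    a = -CD * RHO * AREA / (2.0 * M) * speed * v   # drag
    a = a + SW_OVER_M * np.cross(omega_hat, v)     # Magnus
    a[1] -= G                                      # gravity
    return a

def fly(v0, omega_hat, dt=1e-3, t_max=30.0):
    """Integrate the trajectory with explicit Euler until the ball lands."""
    p, v = np.zeros(3), np.asarray(v0, dtype=float)
    for _ in range(int(t_max / dt)):
        p, v = p + dt * v, v + dt * accel(v, omega_hat)
        if p[1] < 0.0:
            break
    return p    # landing point; p[0] is roughly the carry distance
```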

Figure 4.3: Animation of the hit


Figure 4.4: Animation of the flight

Figure 4.5: Visualization of the trajectory


Chapter 5 Summary

At the end of this project we can sum up our experience and take a brief look at further room for improvement. During the first attempts to capture data we experienced difficulties with event rates so high that the DVS sensor was no longer able to record all events. After increasing the field of view and decreasing the sensitivity of the sensor, we were able to keep the maximum event rate below 600 kE/s at about 180 km/h ball speed. The limit of the sensor is above 1500 kE/s, so this should give us enough headroom to record even professional golf players with drives close to 300 km/h. Conventional cameras capable of recording such fast movements are very expensive and need a lot of computing power to process the huge amount of data they produce, while the DVS sensor is inexpensive and the captured data can even be handled by simple embedded systems such as ARM microcontrollers.

The prototype algorithm developed in this project shows that several useful parameters of the golf swing can be extracted. Unfortunately it was not possible to record a reference measurement with a professional golf swing analysis system to show how precise our algorithm is, but pictures like figure 3.7 show that some results are quite decent. For example, the estimation of the trajectory angle works robustly with sufficient precision. As figure 3.5 shows, the data from which we estimate the ball speed is very clean, so the speed estimation in image space should be very precise. The dominant error is introduced when the scale factor for the conversion into real speed is estimated from the diameter of the ball. Typical ball diameters are about 10 pixels, so we have to expect errors of at least 10–20%. This error could be reduced, at the cost of flexibility, by calibrating the system for a fixed field of view.
The speed of the ball and the angle of the trajectory are the most important extracted parameters for giving the golf player feedback. Currently the estimation of the club position is not very precise (fig. 3.3) but sufficient for the classification of the events. To obtain a more valuable estimation, more sophisticated algorithms (e.g. a Kalman filter) could be combined with the Hough transform in the future.

Further interesting parameters that we were not able to extract in this project are the spin of the ball and the direction in which the ball flies. This does not mean that it is impossible in general: an estimation of the spin could be possible using a special ball with a pattern that makes the spin visible for the DVS, and with a system of two DVS sensors the direction could also be captured.

To summarize the further potential of the project: this is a proof of concept, but we can already see that on the one hand there is still a lot of room for improvement and many possibilities to expand the system towards a more valuable analysis of the golf swing. On the other hand, the project shows that it is also possible to build a simple system with just one DVS sensor that needs no special environment or calibrated setup. In such systems the low cost of the DVS sensor plays an important role, because with just a DVS sensor and a microcontroller a simple gadget could be built. This gadget could use a MEMS accelerometer, like those used in mobile phones, as a virtual horizon and Bluetooth to display a simple visualization of the data on a smartphone.

List of Figures

1.1 Record long drive
1.2 Dynamic Vision Sensor
1.3 Time resolution comparison: conventional camera vs. dynamic vision sensor
2.1 Camera error
2.2 Camera performance experiment
2.3 Camera performance experiment: result
2.4 Camera position
2.5 eDVS setting
3.1 Events before hit
3.2 Events after hit
3.3 Estimation: Angle of club shaft
3.4 Ball trajectory in Hough plane
3.5 Regression: Speed of the ball
3.6 Integral of events along r
3.7 Extracted data
4.1 Reynolds number vs drag coefficient
4.2 Magnus force
4.3 Animation of the hit
4.4 Animation of the flight
4.5 Visualization of the trajectory

Bibliography

[Con] Conradt, J., Tevatia, G., Vijayakumar, S., and Schaal, S. On-line learning for humanoid robot systems. In International Conference on Machine Learning (ICML 2000), Stanford, USA, 2000.

[Den] Denk, C., Llobet-Blandino, F., Galluppi, F., Plana, L. A., Furber, S., and Conradt, J. Real-time interface board for closed-loop robotic tasks on the SpiNNaker neural computing system. In International Conference on Artificial Neural Networks (ICANN), pages 467-474, Sofia, Bulgaria, 2013.

[Har09] Peter E. Hart. How the Hough transform was invented. IEEE Signal Processing Magazine, pages 18-22, November 2009.

[HW77] P. W. Holland and R. E. Welsch. Robust regression using iteratively reweighted least-squares. Communications in Statistics Part A: Theory and Methods, 6(9):813-827, 1977.

[IoN] UZH Zurich Institute of Neuroinformatics. Dynamic vision sensor (DVS): asynchronous temporal contrast silicon retina. http://siliconretina.ini.uzh.ch/wiki/index.php. [Online; accessed 15 June 2013].

[KC] Jooha Kim and Haecheon Choi. Aerodynamics of a golf ball with grooves.

[KTTH] Kattenhorn, Toprak, Tscherkaschin, and Hillebrand. Mathematisches Modellieren: Flugbahn eines Golfballes.

[unk05] Unknown author. Simulating the motion of a golf ball. www.physics.udel.edu, 2005.