Agenda

We've discussed:
- C++ basics
- Some built-in data structures and their applications: stack, map, vector, array
- The Fibonacci example, showing the importance of good algorithms and asymptotic analysis

Now:
- Growth of functions
- Asymptotic notations

After that:
- Recurrence and iteration
- Recurrence relations & a way to solve them

©Hung Q. Ngo (SUNY at Buffalo)

CSE 250 – Data Structures in C++


Outline

1. Understanding growth of functions
2. Asymptotic notations
3. “Advanced” stuff


Conventions

lg n = log_2 n,   log n = log_10 n,   ln n = log_e n



We would like to

- Understand the growth rates of functions that measure run-time and memory usage of data structures and algorithms, with respect to input size. Such a function has the form f : N+ → R+. Examples: f(n) = 50n^2, g(n) = 31 log n.
- Measure time/space complexity independently of computer architecture.
- Measure how slow an algorithm is, or how much memory it consumes, when the input size n gets “large”.
- Compare the time/space complexity of two algorithms/data structures subject to the above conditions.



Architecture independence

Algorithm 1 runs in time f1(n) = (1/1000)·(1.6)^n.
Algorithm 2 runs in time f2(n) = 10·(1.6)^n.
They are considered to have the “same” run-time, asymptotically speaking: Algorithm 2 might run on a computer 10,000 times faster, and then
    10·(1.6)^n / 10,000 = (1/1000)·(1.6)^n.

Rule #1: Ignore constant factors when comparing functions.



Function behavior when input gets “large”

When n is large, (1/1000)·n^3 + 100·log n behaves exactly the same as (1/1000)·n^3, because
    lim_{n→∞} [(1/1000)·n^3 + 100·log n] / [(1/1000)·n^3] = 1.
The term (1/1000)·n^3 dominates the other term.

Similarly, 2^n + n^100 + 10^10·log n behaves exactly the same as 2^n, because
    lim_{n→∞} [2^n + n^100 + 10^10·log n] / 2^n = 1.
The term 2^n dominates the other terms.

Rule #2: Keep only the dominating term.


Practical view of “dominating terms”

Consider an Intel Core i7 Extreme Edition 980X (hex core), 3.33 GHz, top speed < 150·10^9 instructions per second (IPS). For simplicity, say it executes 10^9 IPS. Time to execute f(n) instructions:

    n      lg lg n   lg n     n       n^2      n^3      n^5       2^n                     3^n
    10     1.7 ns    3.3 ns   10 ns   0.1 µs   1 µs     0.1 ms    1 µs                    59 µs
    20     2.17 ns   4.3 ns   20 ns   0.4 µs   8 µs     3.2 ms    1 ms                    3.5 s
    30     2.29 ns   4.9 ns   30 ns   0.9 µs   27 µs    24.3 ms   1 s                     57.2 h
    40     2.4 ns    5.3 ns   40 ns   1.6 µs   64 µs    0.1 s     18.3 min                386 y
    50     2.49 ns   5.6 ns   50 ns   2.5 µs   125 µs   0.3 s     312 h                   227644 centuries
    1000   3.3 ns    9.9 ns   1 µs    1 ms     1 s      277 h     3.4·10^282 centuries    4.2·10^458 centuries

(1.6)^100 ns is approximately 82 centuries (recall fib1()). Also, lg 10^10 ≈ 33 and lg lg 10^10 ≈ 4.9.
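For reference, here is a minimal sketch of the kind of naive recursive Fibonacci routine that fib1() presumably refers to (this exact code is an assumption, not the course's original listing); its two recursive calls per invocation are what produce the roughly (1.6)^n running time used in the estimate above.

    #include <cstdint>
    #include <iostream>

    // Naive recursive Fibonacci. Its running time satisfies
    // T(n) = T(n-1) + T(n-2) + Θ(1), which grows like φ^n ≈ (1.6)^n.
    std::uint64_t fib1(unsigned n) {
        if (n <= 1) return n;                 // fib(0) = 0, fib(1) = 1
        return fib1(n - 1) + fib1(n - 2);     // two recursive calls per level
    }

    int main() {
        std::cout << fib1(40) << '\n';        // already takes a noticeable fraction of a second
    }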



Some other large numbers

The age of the universe ≤ 13 G-years = 13·10^7 centuries.
⇒ The number of seconds since the big bang is ≈ 10^18.
4·10^78 ≤ number of atoms in the universe ≤ 6·10^79.
The probability that a monkey (typing at random) composes Hamlet is about 1 in 10^160. So what's the philosophical implication of this?

Robert Wilensky, speech at a 1996 conference: “We've heard that a million monkeys at a million keyboards could produce the complete works of Shakespeare; now, thanks to the Internet, we know that is not true.”



Talking about large numbers

Fun Puzzle: What's the largest number you can describe using twelve English words?
How about: “Nine googol googol ... googol”, with googol repeated 12 times? (A googol = 10^100.)



Mathematical view of “dominating term”

f(n) dominates g(n) if
    lim_{n→∞} f(n)/g(n) = +∞,
or equivalently,
    lim_{n→∞} g(n)/f(n) = 0.

Example 1: order the following from most dominating down: n^2, log log n, n log n, n^1.2, 3^n.
Answer: 3^n, n^2, n^1.2, n log n, log log n.


A reminder: L'Hôpital's rule

    lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)

if lim_{n→∞} f(n) and lim_{n→∞} g(n) are both 0 or both ±∞.

Examples:
    lim_{n→∞} (lg n)/√n = 0               (1)
    lim_{n→∞} (lg n)^(lg n)/√n = ?        (2)
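As a quick check of (1), treating n as a continuous variable and differentiating numerator and denominator (a worked sketch, not part of the original slide):
    lim_{n→∞} (lg n)/√n = lim_{n→∞} [1/(n·ln 2)] / [1/(2√n)] = lim_{n→∞} 2/(√n·ln 2) = 0.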


Quick way to see what's dominating what

In the following, we only consider a, b ≥ 0. Roughly, log n ≪ n ≪ exp(n); more precisely:

    Fact                                         Example
    (log n)^a ≪ (log n)^b whenever a < b         (log n)^3.2 ≪ (log n)^4.1
    (log n)^a ≪ n^b for any a and any b > 0      (log n)^10 ≪ n^0.01
    n^a ≪ n^b whenever 0 ≤ a < b                 n^0.5 ≪ n^6
    n^a ≪ b^n whenever b > 1                     n^10 ≪ (1.001)^n
    a^n ≪ b^n whenever 1 ≤ a < b                 2^n ≪ 3^n
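The fourth fact (a polynomial n^a is eventually dominated by any exponential b^n with b > 1) can feel counter-intuitive when b is barely above 1. Below is a small self-contained C++ sketch (the sample exponents 10 and 1.001 and all names are illustrative choices, not from the course materials) that compares n^10 against (1.001)^n by comparing logarithms, which avoids overflow:

    #include <cmath>
    #include <initializer_list>
    #include <iostream>

    int main() {
        // Compare n^10 vs (1.001)^n via natural logs:
        // ln(n^10) = 10*ln(n),  ln(1.001^n) = n*ln(1.001).
        for (double n : {1e3, 1e4, 1e5, 2e5, 1e6}) {
            double log_poly = 10.0 * std::log(n);
            double log_exp  = n * std::log(1.001);
            std::cout << "n = " << n
                      << "   ln(n^10) = " << log_poly
                      << "   ln(1.001^n) = " << log_exp
                      << (log_exp > log_poly ? "   exponential is larger\n"
                                             : "   polynomial is larger\n");
        }
    }

The polynomial is still ahead at n = 10^5, but the exponential overtakes it shortly after (around n ≈ 1.2·10^5 here) and never looks back.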



Slightly more difficult cases

Which one grows faster (i.e. dominates)? n log n vs (log n)^3.
Simple trick:
    lim_{n→∞} (n log n)/(log n)^3 = lim_{n→∞} n/(log n)^2 = +∞.

Which one grows faster? n^2/(log n)^30 vs n√n.
Simple trick:
    lim_{n→∞} [n^2/(log n)^30] / (n√n) = lim_{n→∞} n^0.5/(log n)^30 = +∞.



A small summary in comparing growth rates

Mathematically, f(n) ≫ g(n) for “sufficiently large” n means
    lim_{n→∞} f(n)/g(n) = ∞.
We also say f(n) is asymptotically larger than g(n).
They are comparable (or of the same order) if
    lim_{n→∞} f(n)/g(n) = c > 0,
and f(n) is asymptotically smaller than g(n) when
    lim_{n→∞} f(n)/g(n) = 0.

Question: Are there f(n) and g(n) not falling into one of the above cases?
    Consider n^(1 + sin(nπ/2)) vs n.


Some examples

a(n) = √n
b(n) = n^5 + 10^7·n
c(n) = (1.3)^n
d(n) = (lg n)^100
e(n) = 3^n / 10^6
f(n) = 3^180
g(n) = n^0.0000001
h(n) = (lg n)^(lg n)


Outline

1. Understanding growth of functions
2. Asymptotic notations
3. “Advanced” stuff



Motivations for O, Ω, Θ

Sometimes we don't have a precise formula for the runtime. For one thing, the constants are not known and are machine dependent. But we do have a rough upper bound or lower bound.

Upper bound: algorithm A runs in at most n^3 time for n large.
Mathematically: algorithm A runs in time O(n^3).

Lower bound: algorithm A runs in at least n^2 time.
Mathematically: algorithm A runs in time Ω(n^2).

Tight bound: algorithm A runs in exactly n^2·log n time (up to constant factors).
Mathematically: algorithm A runs in time Θ(n^2 log n).


Laundry list of definitions

f(n) = O(g(n)) iff ∃ c > 0, n0 > 0 : f(n) ≤ c·g(n) for all n ≥ n0
f(n) = Ω(g(n)) iff ∃ c > 0, n0 > 0 : f(n) ≥ c·g(n) for all n ≥ n0
f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n))
f(n) = o(g(n)) iff ∀ c > 0, ∃ n0 > 0 : f(n) ≤ c·g(n) for all n ≥ n0
f(n) = ω(g(n)) iff ∀ c > 0, ∃ n0 > 0 : f(n) ≥ c·g(n) for all n ≥ n0

Note: we only consider functions of the form f : N+ → R+. Strictly speaking, “f(n) = O(g(n))” (and likewise for the other notations) doesn't really make sense as an equality; “f(n) ∈ O(g(n))” does, but it is not commonly used.
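For instance, to see how the ∀c quantifier in the definition of o works (a worked check, not on the original slide): n = o(n^2), because for any c > 0 we can take n0 = ⌈1/c⌉, and then n ≤ c·n^2 for all n ≥ n0.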


Big-O

Definition (Big-O). Let f, g : N+ → R+. We write f(n) = O(g(n)) iff there are constants C, n0 > 0 such that
    f(n) ≤ C·g(n) whenever n ≥ n0.



Big-O in plain English

f(n) = O(g(n)) means:
- For n “sufficiently large,” f(n) is bounded above by a constant scaling of g(n).
- An algorithm that runs in time f(n) is not worse than an algorithm that runs in time g(n), asymptotically speaking.
- If f(n) is dominated by g(n), or if f(n) ≈ g(n), then f(n) = O(g(n)).

Examples:
    1000n^3 + 30n + log n = O(n^3)
    1000n^3 + 30n + log n = O(n^4)
    1000n^3 + 30n + log n = O(n^3.1 + n^2)
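To tie the first example back to the definition (a worked check, not on the original slide): for n ≥ 1 we have 30n ≤ 30n^3 and log n ≤ n^3, so 1000n^3 + 30n + log n ≤ 1031·n^3. The constants C = 1031 and n0 = 1 therefore witness 1000n^3 + 30n + log n = O(n^3).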


Big-Ω

Definition (Big-Ω). Let f, g : N+ → R+. We write f(n) = Ω(g(n)) iff there are constants C, n0 > 0 such that
    f(n) ≥ C·g(n) whenever n ≥ n0.



Big-Ω in plain English

f(n) = Ω(g(n)) means:
- For n “sufficiently large,” f(n) is bounded below by a constant scaling of g(n).
- An algorithm that runs in time f(n) is not better than an algorithm that runs in time g(n), asymptotically speaking.
- If f(n) dominates g(n), or if f(n) ≈ g(n), then f(n) = Ω(g(n)).

Examples:
    1000n^3 + 30n + log n = Ω(n^3)
    1000n^3 + 30n + log n = Ω(n^2)
    1000n^3 + 30n + log n = Ω(n^2.9 + n^2)


Θ

Definition (Θ). Let f, g : N+ → R+. We write f(n) = Θ(g(n)) iff there are constants C1, C2, n0 > 0 such that
    C1·g(n) ≤ f(n) ≤ C2·g(n) whenever n ≥ n0.



Θ in plain English

f(n) = Θ(g(n)) means:
- For n “sufficiently large,” f(n) is sandwiched between two constant scalings of g(n).
- An algorithm that runs in time f(n) is about the same as an algorithm that runs in time g(n), asymptotically speaking.

Examples:
    1000n^3 + 30n + log n = Θ(n^3)
    3^n + 2^n·n^1000 = Θ(3^n)
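A quick way to see the second example (a worked check, not on the original slide): 2^n·n^1000 / 3^n = n^1000·(2/3)^n → 0, since any polynomial is dominated by an exponential with base > 1 (here (3/2)^n); so for n large enough, 3^n ≤ 3^n + 2^n·n^1000 ≤ 2·3^n, i.e., the sum is Θ(3^n).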


Outline

1. Understanding growth of functions
2. Asymptotic notations
3. “Advanced” stuff


Laundry list of properties

f(n) = o(g(n)) ⇒ f(n) = O(g(n)) and f(n) ≠ Θ(g(n))          (3)
f(n) = ω(g(n)) ⇒ f(n) = Ω(g(n)) and f(n) ≠ Θ(g(n))          (4)
f(n) = O(g(n)) ⇔ g(n) = Ω(f(n))                              (5)
f(n) = Θ(g(n)) ⇔ g(n) = Θ(f(n))                              (6)
lim_{n→∞} f(n)/g(n) = +∞  ⇔  f(n) = ω(g(n))                  (7)
lim_{n→∞} f(n)/g(n) = c > 0  ⇒  f(n) = Θ(g(n))               (8)
lim_{n→∞} f(n)/g(n) = 0  ⇔  f(n) = o(g(n))                   (9)

Remember: we only consider functions from N+ → R+.


Stirling's approximation

For all n ≥ 1,
    n! = √(2πn) · (n/e)^n · e^(α_n),   where   1/(12n+1) < α_n < 1/(12n).

It then follows that
    n! = √(2πn) · (n/e)^n · (1 + Θ(1/n)).

The last formula is often referred to as Stirling's approximation.
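A handy consequence (a worked note, not on the original slide): taking lg of both sides gives lg(n!) = n·lg n − n·lg e + (1/2)·lg(2πn) + lg(1 + Θ(1/n)), and the n·lg n term dominates the rest, so lg(n!) = Θ(n lg n).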


More (tricky) examples

a(n) = ⌊lg n⌋!
b(n) = n^5 + 10^7·n
c(n) = 2^√(lg n)
d(n) = (lg n)^100
e(n) = 3^n
f(n) = (lg n)^(lg lg n)
g(n) = 2^(n^0.001)
h(n) = (lg n)^(lg n)
i(n) = n!


Special functions

lg* n = min{ i ≥ 0 : lg^(i) n ≤ 1 },  where, for any function f : N+ → R+,
    f^(i)(n) = n                  if i = 0,
    f^(i)(n) = f(f^(i−1)(n))      if i > 0.

Intuitively, compare:
    lg* n vs lg n
    lg* n vs (lg n)^ε, for ε > 0
    2^n vs n^n
    lg*(lg n) vs lg(lg* n)
How about rigorously?
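To get a concrete feel for how slowly lg* grows, here is a small self-contained C++ sketch (the function name and the use of double precision are illustrative choices, not from the course materials) that computes lg* n by repeatedly applying lg until the value drops to 1 or below:

    #include <cmath>
    #include <iostream>

    // Iterated logarithm: how many times lg must be applied to n
    // before the result is <= 1.
    int lg_star(double n) {
        int i = 0;
        while (n > 1.0) {
            n = std::log2(n);   // one more application of lg
            ++i;
        }
        return i;
    }

    int main() {
        std::cout << lg_star(16) << ' '        // 3:  16 -> 4 -> 2 -> 1
                  << lg_star(65536) << ' '     // 4:  65536 -> 16 -> 4 -> 2 -> 1
                  << lg_star(1e18) << '\n';    // 5:  even 10^18 needs only five lg's
    }

Even for inputs far beyond anything a program will ever see, lg* n stays in the single digits, which is why it is often treated as “practically constant.”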



Asymptotic notations in equations

5n^3 + 6n^2 + 3 = 5n^3 + Θ(n^2) means “the LHS is equal to 5n^3 plus some function which is Θ(n^2).”

o(n^6) + O(n^5) = o(n^6) means “for any f(n) = o(n^6) and g(n) = O(n^5), the function h(n) = f(n) + g(n) is equal to some function which is o(n^6).”

Be very careful!!
    O(n^5) + Ω(n^2) =? Ω(n^2)
    O(n^5) + Ω(n^2) =? O(n^5)
