DSA — chapter 1 — Introduction

Khera Shanu
13 min read · Feb 7, 2021


Problem: given two algorithms that solve the same problem, how do we determine which algorithm is better?

Solution 1: Implement both algorithms, run them against some predefined test cases, and see which one takes fewer resources (such as time).

Problems with Solution 1:

  • The test cases might end up favoring one algorithm over the other. For example, suppose the problem is linear search: one implementation searches the array from the beginning (left to right) while the other searches from the end (right to left). If the test cases happen to place the desired element closer to the right, the right-to-left implementation will look better, even though in theory there is no difference between the two.
  • Another problem with this approach is that it depends on machine load: maybe when algorithm 1 ran the server/system was less busy, while when algorithm 2 ran the server was quite busy and it therefore took more time.

Solution 2: Asymptotic Analysis

  • Theoretical approach
  • Measure the order of growth in terms of input size.
  • Guarantees certain things for input sizes greater than a threshold.

It guarantees that if an algorithm is asymptotically better than some other algorithm, after a certain value of input size, the asymptotically better algorithm will perform better.

For example, if an algorithm is linear and another algorithm is logarithmic, it is guaranteed that if the linear algorithm runs on the world’s fastest computer while the logarithmic algorithm runs on an average computer after a particular size of the input, the logarithmic algorithm will be faster.

In red: t = 10⁹ * log(x); in blue: t = x.

I took 10⁹ as the multiplier because the computer running the logarithmic algorithm is assumed to be slow. As you can see, in this case the threshold is x₀ = 10¹⁰, and after this point the slower computer starts beating the fastest computer. (This is true because the order of growth of 10⁹ * log(x) is less than that of x.)
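
(The code example analyzed below appeared as an image in the original post. A minimal sketch of what such a snippet could look like follows; the loop bound x, the second fixed loop of 1000 iterations, and the printf calls are assumptions made only for illustration.)

#include <stdio.h>

/* Hypothetical snippet: one loop that depends on the input size x,
   plus a fixed amount of extra work that does not depend on x. */
void example(int x)
{
    for (int i = 0; i < x; i++)       /* runs x times, roughly C1 per iteration */
        printf("loop body\n");
    for (int i = 0; i < 1000; i++)    /* fixed 1000 iterations: 1000*C1 = C2 */
        printf("constant part\n");
}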

In the above code example:

  • The input size is x. Let's assume one iteration of the loop and the print statement take some constant time C1:
    so the time taken by this code is ≈ C1*x + 1000*C1 = C1*x + C2
  • which is obviously linear ( y = m*x + c ).

Asymptotic notation

  • Ѳ (Theta notation)
  • O (Big O notation)
  • Ω (Omega notation)

Ѳ (Theta notation)
Ѳ notation tells us the highest-growing term, for example:
n² + 2n + 1 = Ѳ(n²) (the highest-order growing term is n²)

O (Big O notation)
Big O notation says the highest-growing term is at most of a certain order, for example (note that O(n²) covers all the functions in Ѳ(n²)):
n² + 2n + 1 = O(n²)
3n = O(n²)
1000 = O(n²)
n³ ≠ O(n²)

Ω (Omega notation)
Ω notation represents all the functions with at least some order of growth, for example:
n² + 2n + 1 = Ω(n²)
2n³ = Ω(n²)
3n + 1 ≠ Ω(n²)
Like O notation, Ω(n²) also covers all the functions covered by Ѳ(n²).

I will come to this topic again with a more mathematical explanation and some graphs to make this clear.

Best case, Average case, and the Worst case of an algorithm.

  • Best case scenario: the element you are searching for is present at the 0th index.
  • Worst case scenario: the element is not present at all, so you iterate over the whole array.
  • Average case scenario:
    This is not that simple: the average case is the sum of the running times over all possible inputs divided by the total number of possible inputs.
    - Calculating the average needs an assumption:
    for example, one can assume that every possible permutation of the input is equally likely, or some other statistical distribution.
    Basically, it's tough to calculate in most cases.

It should be obvious that the best case is not very useful and the average case is tough to calculate, so the worst-case analysis is the one that we will be interested in most of the time.

Connecting Best case, Worst case, and Average case analysis with Asymptotic notations.
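
(The algorithm analyzed in this section appeared as an image in the original post; it is assumed to be a plain linear search, roughly like the sketch below.)

#include <stdio.h>

/* Assumed running example: linear search.
   Best case: key is at index 0 (constant time).
   Worst case: key is not present (n iterations). */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {4, 8, 15, 16, 23, 42};
    printf("%d\n", linear_search(a, 6, 16));   /* prints 3 */
    return 0;
}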

In the above algorithm, time complexity (T):

T ≠ Ѳ(n), because in some cases it is Ѳ(1) (the best-case scenario, for example).
But if we say the worst-case time complexity of this algorithm is Ѳ(n), then that is true.
Also note that the time complexity of the above algorithm is O(n). (Make sure you understand the difference between time complexity and worst-case time complexity.)

Exercises (find the order of growth of following loops):
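
(The loop snippets for this exercise were images in the original post; based on the solution below, they are assumed to look roughly like this, where c is some positive constant.)

#include <stdio.h>

void loop_up(int n, int c)
{
    for (int i = 0; i < n; i += c)    /* goes from 0 towards n in steps of c */
        printf("step\n");
}

void loop_down(int n, int c)
{
    for (int i = n; i > 0; i -= c)    /* goes from n towards 0 in steps of c */
        printf("step\n");
}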

Solution:
In both problems above it's quite clear that they are essentially the same: one goes from 0 towards n while the other goes from n towards 0, and the step size is c.
Either way it takes n/c steps.
n/c => Ѳ(n), so the time complexity of the above loops is Ѳ(n).

Exercises (find the order of growth of following loops):
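
(These loop snippets were also images; based on the solution below, they are assumed to look roughly like this, where c > 1 is a constant.)

#include <stdio.h>

void loop_multiply(int n, int c)
{
    for (int i = 1; i < n; i *= c)    /* 1, c, c², … towards n */
        printf("step\n");
}

void loop_divide(int n, int c)
{
    for (int i = n; i > 0; i /= c)    /* n, n/c, n/c², … towards 0 */
        printf("step\n");
}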

Solution:
One thing is clear in the above loops: they are again essentially the same, but now the movement towards the destination (from 1 towards n, or from n towards 0) has sped up; the movement is not linear anymore.

Let’s go back to the previous problem and understand how we got from 0 towards n by adding c again and again:
c + c + c + … s times = n (might be more than n, but (s-1)*c is definitely less than n)
c*s = n => s = n/c

Similarly, in this example we can say something like this:
c*c*c … s times = n (from 1 towards n)
c^s = n => s = log n ( log with base c)

The time complexity of the above loops is Ѳ(log n).
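
(The next loop was also an image; from the analysis below, it is assumed to raise the index to the power c, with c > 1, on every iteration, starting from 2.)

#include <stdio.h>
#include <math.h>

void loop_power(double n, double c)
{
    for (double i = 2; i <= n; i = pow(i, c))   /* 2, 2^c, 2^(c²), … */
        printf("step\n");
}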

In the above loop, let's see how the value of the index changes with each iteration:
index => 2, 2^c, (2^c)^c, ((2^c)^c)^c, …
=> index => 2, 2^c, 2^(c²), 2^(c³), …
After the k-th iteration, index = 2^(c^k) ≤ n
=> c^k ≤ log₂(n) => k ≤ log_c(log₂(n)), i.e. k ≤ log log n (the outer log is to base c, the inner log to base 2),
so the time complexity of this loop is Ѳ(log log n).

What is the time complexity of the below algorithm:
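
(The algorithm was shown as an image in the original; a hypothetical sketch that matches the analysis below, constant time for multiples of 10 and linear time otherwise, is:)

#include <stdio.h>

void fun1(int n)
{
    if (n % 10 == 0)
        return;                    /* Theta(1) when n is a multiple of 10 */
    for (int i = 0; i < n; i++)
        printf("work\n");          /* Theta(n) otherwise */
}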

The above algorithm has time complexity O(n), and not Ѳ(n), because sometimes (when n is a multiple of 10) it is Ѳ(1).

What is the time complexity of the below algorithm:
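
(Again the code was an image; from the answer below it is assumed to be two independent loops, one over n and one over m, roughly like this sketch.)

#include <stdio.h>

void fun2(int n, int m)
{
    for (int i = 0; i < n; i++)
        printf("first loop\n");    /* C1 * n */
    for (int j = 0; j < m; j++)
        printf("second loop\n");   /* C2 * m */
}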

C1*n + C2*m = Ѳ(n+m)

What is the time complexity of the below algorithm:
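
(The code was an image; from the answer below it is assumed to be two nested loops where the inner loop runs n, n-1, n-2, …, 1 times, roughly like this sketch.)

#include <stdio.h>

void fun3(int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++)   /* runs n, n-1, …, 1 times as i grows */
            printf("work\n");
}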

let’s see how many times the inner loop will run: n, n-1, n-2 … 1
well that’s obviously the sum of first n natural numbers: n(n+1)/2 = Ѳ(n²)

Limitations of Asymptotic analysis

  • Let's say we have one algorithm that is O(n²) and another that is O(n), but the constants in the O(n²) algorithm are really low compared to those in the O(n) one. After a certain value of n the O(n) algorithm will always beat the O(n²) one, but what if that value of n is not practical?

Blue curve: y = x²/10⁵; black line: y = x. The linear curve starts performing better only after x = 100,000. What if 100,000 is not even practical for our real-life scenario?

  • Asymptotic notation also ignores a lot of computer-related things. For example, quicksort has a worst-case time complexity of Ѳ(n²) while merge sort has Ѳ(n*log n), but quicksort is still often better in practice because it is a more architecture- and cache-friendly algorithm.

Analysis of Recursive Algorithms
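
(The recursive code analyzed here was an image in the original post; from the recurrence derived below it is assumed to have a base-case check, a linear loop, and two recursive calls on half the input, roughly like this sketch.)

#include <stdio.h>

void solve(int n)
{
    if (n <= 1)                    /* condition: constant C1 time */
        return;
    for (int i = 0; i < n; i++)    /* loop: C2 * n time */
        printf("linear work\n");
    solve(n / 2);                  /* two recursive calls on half the input */
    solve(n / 2);
}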

The condition above will take some constant time C1 and the loop will take C2*n time. If we say the algorithm takes T(n) time, we can write something like this:

T(n) = C1 + C2*n + 2*T(n/2) = Ѳ(n) + 2T(n/2)

Before computing T(n) let’s look at more examples.
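
(The two example snippets were images; from the recurrences below, example 1 is assumed to do constant work plus two half-size recursive calls, and example 2 constant work plus one call on n-1, roughly like these sketches.)

#include <stdio.h>

void example1(int n)               /* T(n) = C + 2T(n/2) */
{
    if (n <= 1) return;
    printf("constant work\n");
    example1(n / 2);
    example1(n / 2);
}

void example2(int n)               /* T(n) = C + T(n-1) */
{
    if (n <= 1) return;
    printf("constant work\n");
    example2(n - 1);
}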

Time complexity of example 1 will be T(n) = C + 2T(n/2)
Time complexity of example 2 will be T(n) = C + T(n-1)

Now we know how to write the recursive equation of T(n), let’s see how we can solve T(n) and get a value in terms of n.

There are multiple mathematical methods to solve recurrences like these:
T(n) = 2T(n/2) + Cn
such as the substitution method, master method, recursion tree method, etc.
for now, we will only discuss the recursion tree method.

Recursion tree method.

  • We write the non-recursive part as the root of the tree and the recursive part as the children.
  • We keep expanding the children till we see a pattern.

At every level the work done is Cn; if the height of the tree is h, the total work done is C*n*h.
The input size is decreasing at each level like this: n, n/2, n/4, …, 1
at height h, the input size is n/2^h = 1
=> 2^h = n
=> h = log₂(n)

So the total time taken by the algorithm is C*n*log n => Ѳ(n log n)

Let’s analyze the below recursion:

  • T(n) = 2T(n-1) + C
    T(1) = C


The time spent by this algorithm at each level of the tree forms the sequence: C, 2C, 4C, …
which is a GP with first term a = C and common ratio r = 2.
The height of this tree is simple to calculate: n is reduced by 1 at each level, so the height is n.
So the sum of the time spent at all levels = C*(2^n - 1)/(2 - 1) = C*(2^n - 1) => Ѳ(2^n),
so the time complexity is exponential, which is super slow!

  • T(n) = T(n/2) + C
    T(1) = C


The series here is C + C + C + … (log n times), so this is simply Ѳ(log n).

  • T(n) = 2T(n/2) + C
    T(1) = C


The series here will be: C + 2C + 4C + … (log n terms).
So the sum of the time spent at all levels = C*(2^(log n) - 1) ≈ C*(n - 1) => Ѳ(n).

  • T(n) = T(n/4) + T(n/2) + Cn
    T(1) = C


Cn + (3/4)Cn + (9/16)Cn + …
again a GP, with a = Cn and r = 3/4.
This recursion is not that simple: it looks like a GP now, but some T(n/4) branches will reach T(1) faster than the T(n/2) branches, so the number of nodes differs at the lower levels and the GP relation is lost.

But we can treat the tree as if it were a full tree and assume it remains a GP. Now Ѳ notation is not possible, because we are only talking about an upper bound, so big O notation will be used.

Since r < 1, for a large number of terms, r^k ≈ 0, so the sum ≈ Cn * 1/(1 - 3/4) = 4Cn => O(n).

  • T(n) = T(n-1) + T(n-2) + C
    T(1) = C
    Recursion equation for Fibonacci numbers.


Again, this tree won't be a full tree, so only O (an upper bound) is easy to compute:
C*(1 + 2 + 4 + … n terms) = C*(2^n - 1) = O(2^n)

Mathematical definitions of Asymptotic notations:

  • Theta Notation
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }
An Example

Here f(n) is the green curve, x² + 3x + 10, and I have used c1 = 0.5 and c2 = 3.

  • Big O notation:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }

Here I have used f(n) = 1.5x + 50, g(n) = x, and c = 2, so c*g(n) = 2x.

  • Omega notation:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0 }

Since omega notation is not very useful, I won't waste my time on it!

Exercises and Quiz:

What is the time complexity of fun?

int fun(int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i; j > 0; j--)
            count = count + 1;
    return count;
}

Solution:
The inner loop runs i times for each i = 0, 1, …, n-1.
So the total number of times the loop body runs is 0 + 1 + … + (n-1) = n(n-1)/2 = Ѳ(n²).

Let W(n) and A(n) denote, respectively, the worst-case and average-case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE?
(A) A(n) = Ω(W(n))
(B) A(n) = Θ(W(n))
(C) A(n) = O(W(n))
(D) A(n) = o(W(n))

Solution:
let’s understand this one option at a time:
Option A says the average time will always be at least the worst time; that's clearly not always true (the average can be strictly smaller in order than the worst case).

Option B says the average time will be theta of the worst time. What does that even mean? It says the average time and the worst time always have the same highest-order term. That's not necessarily so: for example, quicksort has an average time complexity of O(n*log n) while its worst-case time complexity is O(n²).
But it's not necessary to remember the time complexities of specific algorithms; can we think of a simple scenario that proves this wrong (or right)? Let's try to get an average of C1*n and a worst case of C2*n²: a program that loops from 0 to n for most inputs, but for one particular input also runs an inner loop from 0 to n, would do. So option B is not always true.

Option C is true because the average case's highest-order term is smaller than or equal to the worst case's, and some constant can be chosen to satisfy the definition of Big-O. For example:
A = n, W = n², or A = n², W = n².

Option D is not true. To judge it as an independent option we need to understand little-o notation (which is not used a lot): little-o is a strict upper bound, so A(n) = o(W(n)) fails whenever A and W have the same highest-order term.

Which of the following is not O(n^2)?
- (15^10) * n + 12099
- n^1.98
- n^3 / (sqrt(n))
- (2^20) * n
  • 15¹⁰*n + 12099 is C1*n + C2; this is O(n), and everything that is O(n) is also O(n²).
  • n^1.98 is O(n^1.98), and since 1.98 < 2, it is also O(n²).
  • n³/√n = n^2.5; since 2.5 > 2, this is O(n^2.5) but not O(n²). This is the answer.
  • 2²⁰*n is C1*n, which is O(n) and also O(n^x) for any x ≥ 1.
Which of the given options provides the increasing order of asymptotic complexity of the functions f1, f2, f3 and f4?

f1(n) = 2^n
f2(n) = n^(3/2)
f3(n) = n*Logn
f4(n) = n^(Logn)

- f3, f2, f4, f1
- f3, f2, f1, f4
- f2, f3, f1, f4
- f2, f3, f4, f1

No doubt the smallest here is n*log(n). Next, between n^(3/2) and n^(log n): n^(log n) > n^(3/2), because log(n) keeps increasing with n while 3/2 stays fixed, and log(n) exceeds 3/2 as n grows.
The last comparison is between n^(log n) and 2^n. This one is not as easy: if the bases were the same you could just compare the exponents, but here the base 2 stays fixed while the base n grows, so it's like comparing 2^p1 with n^p2 where p1 >> p2, and it's not obvious which one wins.
The only way I can think of settling this right now is by plugging in examples such as n = 10⁵, 10¹⁰, etc., and it looks like 2^n is indeed the clear winner. (Another way: compare the logarithms of the two, n versus (log n)², and n grows much faster than (log n)².)

So, in increasing order: n*log n, n^(3/2), n^(log n), 2^n => f3, f2, f4, f1, so the answer is option 1.

The following image shows the curves for both functions, 2^x (blue) and x^log(x) (red):

int unknown(int n) {
    int i, j, k = 0;
    for (i = n/2; i <= n; i++)
        for (j = 2; j <= n; j = j * 2)
            k = k + n/2;
    return k;
}
What is the returned value of the above function?
(A) Θ(n^2)
(B) Θ(n^2 * Logn)
(C) Θ(n^3)
(D) Θ(n^3 * Logn)

Let's analyze the code:
the outer loop runs about n/2 times and the inner loop runs about log n times,
so the statement k = k + n/2 runs (n/2) * log n times.
The returned value is k = (n/2) * ((n/2) * log n) = (n² log n)/4 = Θ(n² * log n), i.e. option B.

The recurrence equation
T(1) = 1
T(n) = 2T(n - 1) + n, n ≥ 2
evaluates to:
- 2^(n+1) - n - 2
- 2^n - n
- 2^(n+1) - 2n - 2
- 2^n + n

Solution: (I copy-pasted this one :D )

T(n)  = n + 2(n-1) + 4(n-2) + 8(n-3) + … + 2^(n-1) * 1
2T(n) = 2n + 4(n-1) + 8(n-2) + 16(n-3) + … + 2^n * 1
2T(n) - T(n) = -n + 2 + 4 + 8 + … + 2^n
T(n) = -n + 2^(n+1) - 2 [applying the GP sum formula to 2, 4, …, 2^n]
     = 2^(n+1) - n - 2 (first option)
Consider the following three claims:
1. (n + k)^m = Θ(n^m), where k and m are constants
2. 2^(n + 1) = O(2^n)
3. 2^(2n + 1) = O(2^n)
Which of these claims are correct?
- 1 and 2
- 1 and 3
- 2 and 3
- 1, 2 and 3

Solution: this one is easy.
Let's go claim by claim. For claim 1, k and m are constants; if we expand using the binomial theorem, the highest-order term is n^m, so (n + k)^m = Θ(n^m) and claim 1 is correct.
For claim 2, 2^(n+1) = 2 * 2^n, which is just a constant times 2^n, so it is O(2^n) and claim 2 is correct.
For claim 3, 2^(2n+1) = 2 * 4^n, which grows much faster than 2^n, so it is not O(2^n) and claim 3 is wrong.
The answer is therefore "1 and 2".

Consider the following C code segment:

int f (int x)
{
    if (x < 1) return 1;
    else return (f(x-1) + g(x));
}

int g (int x)
{
    if (x < 2) return 2;
    else return (f(x-1) + g(x/2));
}

Of the following, which best describes the growth of f(x) as a function of x?
- Linear
- Exponential
- Quadratic
- Cubic

Solution: f(x) = f(x-1) + g(x) and g(x) = f(x-1) + g(x/2), so f(x) > 2*f(x-1).
Then f(x) > 2*2*f(x-2) > … > 2^x * f(0), i.e. f(x) = Ω(2^x), thus Exponential.

void fun()
{
    int i, j;
    for (i=1; i<=n; i++)
        for (j=1; j<=log(i); j++)
            printf("kherashanu.com");
}

What is the time complexity of the above function?
- Θ(n)
- Θ(n log n)
- Θ(n^2)
- Θ(n^2(Logn))

The inner loop runs log(i) times for each i, so the total is log(1) + log(2) + … + log(n) = log(n!) = Θ(n log n), so the answer is Θ(n log n).

See this: https://www.youtube.com/watch?v=lU_QT5GSuxI
