Big-O Notation (Time Complexity)

 
Concept:
- Computer scientists borrowed the concept from mathematics.
- Big-O focuses on the number of steps, but in a specific way.

Key idea:
- We focus on how fast steps grow, not exact counts
- Constants and small differences are ignored

Big-O answers the question:
- What happens to performance when the input becomes VERY large?
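A quick sketch of the key idea: suppose one algorithm takes 3n + 5 steps (a hypothetical count) and another takes n^2. Big-O keeps only the growth class, and the numbers show why constants and small terms are ignored:

```python
# Hypothetical step counts: 3n + 5 is O(n); n^2 is O(n^2).
# The constant 3 and the +5 become irrelevant as n grows,
# while n^2 pulls away from both.
print("n      3n+5     n^2")
for n in [10, 100, 1000, 10000]:
    print(f"{n:<6} {3 * n + 5:<8} {n ** 2}")
```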

Linear Search - O(n)

 
""" LINEAR SEARCH - O(n)
------------------------
Check each element one by one until the target is found.

It's like looking for a name in an unsorted contact list
by scanning from top to bottom.

Time Complexity O(n):
O(1) - searched item is first (best case)
O(n) - searched item is last or missing (worst case)
----------------------------------------------------
"""

contacts = ['Alice', 'Bob', 'Carol', 'Dave', 'Eve']

def find_contact(contacts, name):
    steps = 0

    for i in range(len(contacts)):
        steps += 1

        if name == contacts[i]:  # found at index i
            return i, steps      # number of checks: steps
            
    return -1, steps  # not found

index, steps = find_contact(contacts, 'Dave')

print(f"Found at index: {index} steps: {steps}")

# --------------------------
# Found at index: 3 steps: 4
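The worst case is when the name is missing, so every contact must be checked. Re-running the same find_contact function (repeated here so the snippet runs on its own) with a name that is not in the list:

```python
contacts = ['Alice', 'Bob', 'Carol', 'Dave', 'Eve']

def find_contact(contacts, name):
    steps = 0
    for i in range(len(contacts)):
        steps += 1
        if name == contacts[i]:
            return i, steps
    return -1, steps

# 'Zoe' is not in the list, so all n = 5 elements are checked.
index, steps = find_contact(contacts, 'Zoe')
print(f"Found at index: {index} steps: {steps}")
# Found at index: -1 steps: 5
```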

Binary Search - O(log n)

 
""" BINARY SEARCH - SORTED LISTS -O(log N)
------------------------------------------
Binary search is effective only on SORTED lists.

Concept:
 - Repeatedly split the search space in half
 - Discard the half that cannot contain the target

Why O(log n)?
 - Each step cuts the search space in half
 - Doubling the data only adds ONE extra step

Compare the middle element to the target:
  - If equal, we're done
  - If target is greater, search the right half
  - If smaller, search the left half

Analogy:
  - Finding a word in a dictionary by jumping to the middle, 
  - Narrowing the search range each time

Requirement:
  The array MUST be sorted.

When to use:
  - You need fast lookups (much faster than linear search for large N).

Time complexity:
  O(log N) comparisons in the worst case.
----------------------------------------
"""

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def binary_search(items, val):
    left = 0
    right = len(items) - 1
    steps = 0

    while left <= right:
        steps += 1
        middle = (left + right) // 2

        if val == items[middle]:
            return middle, steps  # found

        if val > items[middle]:
            left = middle + 1
        else:
            right = middle - 1

    return -1, steps  # not found


# 1) Small ordered list
# ---------------------
key, steps = binary_search(data, 7)
print(f"found {key} in {steps} steps")  # found 6 in 4 steps


# 2) Doubling the list adds only 1 extra step
# ----------------------------------------------------------
data = [i for i in range(21)]
key, steps = binary_search(data, 17)
print(f"found {key} in {steps} steps")  # found 17 in 5 steps


# 3) Even for a large list, the search is very fast
# ------------------------------------------------
data = list(range(100000))
key, steps = binary_search(data, 70000)
print(f"found {key} in {steps} steps")  # found 70000 in 17 steps

Bubble Sort - O(n^2)

 
""" BUBBLE SORT - O(n^2)
--------------------
Repeatedly swap adjacent elements if they are out of order.

Why O(n^2)?
- One loop scans the list
- Another loop repeats the scan many times
-> Nested loops = quadratic growth

Analogy:
Sorting cards by repeatedly comparing neighbors,
over and over, until everything is in order.

Time Complexity - O(n^2)
 - Best case:
   O(n) - already sorted (the early-exit flag stops after one pass)

 - Worst case:
   O(n^2) - reverse order
------------------------------------
"""

def bubble_sort(items):
    is_sorted = False  # avoid shadowing the built-in sorted()
    steps = 0

    while not is_sorted:
        is_sorted = True

        for i in range(len(items) - 1):
            steps += 1

            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]  # swap
                is_sorted = False
                steps += 1

    return items, steps


data = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
data, steps = bubble_sort(data)
print(data, steps)  
# -----------------------------------
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] 135


# Bubble sort (optimized)
# -----------------------
def bubble_sort_optimized(items):
    is_sorted = False
    steps = 0
    right = len(items) - 1  # last unsorted position

    while not is_sorted:
        is_sorted = True
        for i in range(right):
            steps += 1
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]  # swap
                steps += 1
                is_sorted = False

        right -= 1  # largest element is now in place; shrink the scan

    return items, steps

data = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
data, steps = bubble_sort_optimized(data)
print(data, steps)  
# ----------------------------------
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] 90
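The O(n) best case can also be confirmed directly: on an already sorted list, the optimized version makes one pass of n - 1 comparisons, finds nothing to swap, and stops. A small self-contained check (the function is repeated here so the snippet runs on its own):

```python
def bubble_sort_optimized(items):
    is_sorted = False
    steps = 0
    right = len(items) - 1
    while not is_sorted:
        is_sorted = True
        for i in range(right):
            steps += 1
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                steps += 1
                is_sorted = False
        right -= 1
    return items, steps

data = list(range(1, 11))  # already sorted
_, steps = bubble_sort_optimized(data)
print(steps)  # 9 -> one pass of n-1 comparisons, no swaps
```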


# Side-by-side comparison, explaining why O(n^2)
# ---------------------------------------------
print("n BS-steps n^2")

for n in [5, 10, 20, 40, 80]:
    arr = list(range(n, 0, -1))

    _, steps = bubble_sort_optimized(arr)
    print(n, steps, n**2)

# --------------
# n BS-steps n^2
# 5 20 25
# 10 90 100
# 20 380 400
# 40 1560 1600
# 80 6320 6400
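As a closing sketch, the same kind of comparison for the two searches (self-contained step counters, repeating the earlier logic so the snippet runs on its own): linear search grows with n, while binary search barely moves.

```python
def linear_search_steps(items, val):
    steps = 0
    for item in items:
        steps += 1
        if item == val:
            return steps
    return steps

def binary_search_steps(items, val):
    left, right, steps = 0, len(items) - 1, 0
    while left <= right:
        steps += 1
        middle = (left + right) // 2
        if items[middle] == val:
            return steps
        if val > items[middle]:
            left = middle + 1
        else:
            right = middle - 1
    return steps

print("n linear binary")
for n in [100, 10_000, 1_000_000]:
    data = list(range(n))
    target = n - 1  # last element: worst case for linear search
    print(n, linear_search_steps(data, target), binary_search_steps(data, target))
```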



