Ready to Master Big O Notation? Take the Quiz Now!

Dive into Big O practice problems and sharpen your algorithm analysis skills.

Difficulty: Moderate
Time: 2-5 minutes

This Big O notation practice quiz helps you analyze algorithm runtime, pick the right complexity class, and compare best, average, and worst cases. Work through short items to spot gaps before an exam or interview, then try the quick algorithm quiz or the in-depth algorithm test for more practice.

What is the time complexity of accessing an element in an array by index?
O(log n)
O(n log n)
O(n)
O(1)
Accessing an element by index in a contiguous array is a constant-time operation because the memory address is computed directly from the base address plus an offset. No iteration or work proportional to the array size is required, so the operation runs in O(1).
What is the worst-case time complexity of linear search in an unsorted array?
O(log n)
O(n^2)
O(n)
O(1)
Linear search examines each element in sequence until the target is found or the end of the array is reached. In the worst case it must check every element, so its worst-case time complexity is linear, O(n).
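To make the worst case concrete, here is a minimal Python sketch of linear search (the function name and sample data are illustrative, not from the quiz source):

    def linear_search(items, target):
        # Scan elements in order; the worst case touches all n of them.
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1  # target absent: every element was examined, O(n)

    print(linear_search([4, 8, 15, 16, 23, 42], 99))  # -1 (worst case)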
What is the time complexity of inserting a new node at the beginning of a singly linked list?
O(n^2)
O(n)
O(log n)
O(1)
Inserting at the head of a singly linked list involves creating a new node and adjusting two pointers. These operations do not depend on the list size, so the insertion runs in constant time, O(1).
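A minimal sketch of head insertion, assuming a bare-bones Node class (the names here are illustrative):

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def push_front(head, value):
        # One node allocation and one pointer assignment, independent
        # of list length: O(1).
        return Node(value, next=head)

    head = None
    for v in [3, 2, 1]:
        head = push_front(head, v)  # list is now 1 -> 2 -> 3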
What is the time complexity of removing the last element from a singly linked list when only a head pointer is available?
O(n^2)
O(n)
O(1)
O(log n)
Removing the last element of a singly linked list requires traversing from the head to the node before the last. This traversal takes time proportional to the number of nodes, yielding O(n).
What is the time complexity of push and pop operations on a stack implemented using an array?
O(1)
O(log n)
O(n)
O(n^2)
Both push and pop update the top index and read or write a single array slot. These operations do not depend on the stack size, so each runs in constant time, O(1).
What is the time complexity of enqueue in a queue implemented with a linked list that has head and tail pointers?
O(1)
O(n^2)
O(log n)
O(n)
When both head and tail pointers are maintained, adding a new node at the end requires updating only the old tail's next reference and the tail pointer. This constant set of operations gives O(1) time.
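A sketch of the tail-pointer enqueue, with hypothetical class names:

    class _Node:
        def __init__(self, value):
            self.value = value
            self.next = None

    class Queue:
        def __init__(self):
            self.head = None  # dequeue end
            self.tail = None  # enqueue end

        def enqueue(self, value):
            # Only the old tail's next reference and the tail pointer
            # change, so the cost is O(1) regardless of queue length.
            node = _Node(value)
            if self.tail is None:
                self.head = self.tail = node
            else:
                self.tail.next = node
                self.tail = node

    q = Queue()
    for v in [1, 2, 3]:
        q.enqueue(v)
    print(q.head.value, q.tail.value)  # 1 3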
What is the time complexity of finding the maximum element in an unsorted array?
O(n)
O(1)
O(n log n)
O(log n)
To find the maximum in an unsorted array, each element must be inspected at least once. This linear scan takes time proportional to the number of elements, giving O(n).
What is the time complexity of binary search in a sorted array?
O(n log n)
O(log n)
O(1)
O(n)
Binary search halves the search interval at each step, so the number of comparisons grows logarithmically with the array size. Thus, its time complexity is O(log n).
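A compact Python version makes the halving visible (names and data are illustrative):

    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2  # the interval halves every iteration
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4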
What is the average-case time complexity of merge sort?
O(n log n)
O(log n)
O(n)
O(n^2)
Merge sort divides the list and merges sorted halves, performing O(n) work per level across log n levels. This yields O(n log n) for both average and worst cases.
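A sketch of top-down merge sort; the slicing-based merge is written for clarity rather than efficiency:

    def merge_sort(items):
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])   # log n levels of splitting...
        right = merge_sort(items[mid:])
        merged = []                      # ...with O(n) merge work per level
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 8, 1]))  # [1, 2, 5, 8]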
What is the worst-case time complexity of quicksort?
O(n^2)
O(n log n)
O(n^3)
O(n)
In the worst case (e.g., when pivot picks are poor), quicksort partitions into sizes 1 and n-1, leading to the recurrence T(n) = T(n-1) + O(n), which solves to O(n^2).
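This illustrative first-element-pivot version hits the worst case on already-sorted input, because each partition peels off only one element:

    def quicksort(items):
        if len(items) <= 1:
            return items
        pivot, rest = items[0], items[1:]
        # On sorted input, 'smaller' is empty every time, so recursion
        # depth is n and total work is n + (n-1) + ... = O(n^2).
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]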
What is the time complexity of inserting an element into a binary heap?
O(log n)
O(1)
O(n log n)
O(n)
Inserting into a binary heap adds the element at the end and then performs 'heapify-up' through the tree height. Since the height is O(log n), insertion is O(log n).
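Python's standard heapq module demonstrates this directly with a min-heap:

    import heapq

    heap = []
    for value in [9, 4, 7, 1]:
        heapq.heappush(heap, value)  # sifts up through O(log n) levels
    print(heap[0])  # 1 -- the minimum sits at the root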
What is the average-case time complexity of hash table lookup?
O(log n)
O(n)
O(1)
O(n log n)
With a good hash function and a low load factor, most lookups involve computing a hash and accessing a bucket directly, resulting in an average time of O(1).
What is the time complexity of breadth-first search (BFS) in terms of vertices V and edges E?
O(V)
O(E)
O(V + E)
O(V * E)
BFS visits each vertex and inspects each edge once in an adjacency list representation. Therefore, its time complexity is O(V + E).
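A sketch of BFS over an adjacency list (the graph and names are made up for illustration):

    from collections import deque

    def bfs(adjacency, start):
        # Each vertex is enqueued at most once and each edge inspected
        # once, giving O(V + E).
        visited = {start}
        order = []
        queue = deque([start])
        while queue:
            vertex = queue.popleft()
            order.append(vertex)
            for neighbor in adjacency[vertex]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        return order

    graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
    print(bfs(graph, 'a'))  # ['a', 'b', 'c', 'd']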
What is the time complexity of depth-first search (DFS) in a graph represented by adjacency lists?
O(V)
O(V * E)
O(V + E)
O(E)
DFS explores each vertex and edge exactly once in an adjacency list graph. This leads to a combined time of O(V + E).
What is the time complexity of the naive recursive algorithm for computing the nth Fibonacci number?
O(2^n)
O(n^2)
O(n log n)
O(n)
The naive recursive Fibonacci calls itself twice per non-base call, leading to the recurrence T(n) = T(n-1) + T(n-2) + O(1), whose solution grows exponentially and is bounded by O(2^n).
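The exponential blow-up is easy to see in the naive version:

    def fib(n):
        # Two recursive calls per non-base case: the call tree roughly
        # doubles in width each level, so the work is bounded by O(2^n).
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(10))  # 55, but already with many redundant subcalls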
What is the best-case time complexity of insertion sort on an almost sorted array?
O(1)
O(n log n)
O(n)
O(n^2)
In the best case, each element needs only one comparison because it is already in place. The outer loop still runs n times, giving a best case of O(n).
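A sketch showing why nearly sorted input is the best case:

    def insertion_sort(items):
        for i in range(1, len(items)):
            key = items[i]
            j = i - 1
            # On nearly sorted input this loop exits almost immediately,
            # so each element costs about one comparison: best case O(n).
            while j >= 0 and items[j] > key:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = key
        return items

    print(insertion_sort([1, 2, 4, 3, 5]))  # [1, 2, 3, 4, 5]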
What is the amortized time complexity of appending an element to a dynamic array (e.g., ArrayList)?
O(1)
O(n)
O(n log n)
O(log n)
Although occasional resizing takes O(n) time, most append operations are constant time. Spread over many operations, the amortized cost per append is O(1).
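You can watch the occasional resizes happen in CPython, where sys.getsizeof reports the list's allocated size (an implementation detail, so the exact numbers vary by version):

    import sys

    lst = []
    last = sys.getsizeof(lst)
    for i in range(32):
        lst.append(i)  # usually O(1); occasionally triggers an O(n) copy
        size = sys.getsizeof(lst)
        if size != last:
            print(f"resized after {len(lst)} appends: {last} -> {size} bytes")
            last = size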
What is the time complexity of building a binary heap (heapify) from an unsorted array?
O(log n)
O(n^2)
O(n)
O(n log n)
Bottom-up heap construction uses sift-down operations whose total cost sums to O(n), not O(n log n). This yields a linear-time heapify.
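A bottom-up heapify sketch for a min-heap (function names are illustrative):

    def heapify(items):
        n = len(items)
        # Sift down from the last internal node to the root. Each node's
        # cost is bounded by its height, and the heights sum to O(n).
        for i in range(n // 2 - 1, -1, -1):
            sift_down(items, i, n)

    def sift_down(items, i, n):
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and items[child] < items[smallest]:
                    smallest = child
            if smallest == i:
                return
            items[i], items[smallest] = items[smallest], items[i]
            i = smallest

    data = [9, 4, 7, 1, 3]
    heapify(data)
    print(data[0])  # 1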
What is the time complexity of Dijkstra's algorithm using a binary heap for a graph with V vertices and E edges?
O(E log V)
O(V^2)
O(E + V)
O((E + V) log V)
With a binary heap, each edge relaxation costs O(log V) for a decrease-key or insert. Summed over E edge relaxations and V extractions, the time is O((E + V) log V).
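A sketch of the binary-heap variant using lazy deletion (stale heap entries are skipped rather than decrease-keyed); the example graph is hypothetical:

    import heapq

    def dijkstra(adjacency, source):
        dist = {source: 0}
        heap = [(0, source)]  # (distance, vertex); each push is O(log V)
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue  # stale entry, skip
            for v, weight in adjacency[u]:
                nd = d + weight
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
    print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 1, 'c': 3}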
What is the time complexity of counting sort for n elements and range k of input values?
O(k log n)
O(n k)
O(n log k)
O(n + k)
Counting sort builds a count array of size k and then processes n elements and k counts. This two-phase process yields O(n + k) time complexity.
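A sketch for non-negative integers below k (names are illustrative):

    def counting_sort(items, k):
        counts = [0] * k            # O(k) initialization
        for value in items:         # O(n) counting pass
            counts[value] += 1
        output = []
        for value, count in enumerate(counts):  # O(n + k) output pass
            output.extend([value] * count)
        return output

    print(counting_sort([4, 1, 3, 1, 0], k=5))  # [0, 1, 1, 3, 4]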
What is the time complexity of radix sort for n numbers with d digits using counting sort as a subroutine?
O(dn)
O(n^2)
O(d n log n)
O(n log n)
Radix sort processes each of the d digit positions using counting sort on n elements. Each pass costs O(n + k), and because k is a fixed constant for the digit range (e.g., 10 for decimal digits), the overall time is O(dn).
What is the time complexity of the Floyd-Warshall algorithm for all-pairs shortest paths on a graph with n vertices?
O(2^n)
O(n^2 log n)
O(n^2)
O(n^3)
Floyd-Warshall uses three nested loops over n vertices to update path costs, resulting in O(n^3) time complexity.
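The three nested loops are essentially the whole algorithm; in this sketch INF marks missing edges:

    def floyd_warshall(dist):
        n = len(dist)               # dist is an n x n distance matrix
        for k in range(n):          # three nested loops over n vertices
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist

    INF = float('inf')
    matrix = [[0, 3, INF],
              [INF, 0, 1],
              [2, INF, 0]]
    print(floyd_warshall(matrix))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]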
What is the time complexity of Kosaraju's algorithm for finding strongly connected components in a directed graph?
O(E log V)
O(V^2)
O(V * E)
O(V + E)
Kosaraju's algorithm performs two DFS traversals and a graph reversal, each taking O(V + E) time in an adjacency list representation. Therefore, the overall time is O(V + E).
What is the average-case time complexity of interpolation search on a uniformly distributed sorted array?
O(log log n)
O(log n)
O(n)
O(n log n)
Interpolation search estimates the target's position from its key value, reducing the average search to O(log log n) on uniformly distributed data. The worst case can degrade to O(n), but the average case remains O(log log n).
What is the time complexity of naive matrix multiplication for two n x n matrices?
O(2^n)
O(n^3)
O(n^2)
O(n^2 log n)
The naive algorithm multiplies each row of the first matrix by each column of the second, performing n multiplications per pair and iterating over n^2 pairs, yielding O(n^3).
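The triple loop makes the n^3 operation count explicit (a sketch with hypothetical matrices):

    def matmul(A, B):
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):          # n^2 output cells...
            for j in range(n):
                for k in range(n):  # ...each an n-term dot product
                    C[i][j] += A[i][k] * B[k][j]
        return C

    M = [[2, 3], [4, 5]]
    I = [[1, 0], [0, 1]]
    print(matmul(M, I))  # [[2, 3], [4, 5]]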
What is the approximate time complexity of Strassen's matrix multiplication algorithm?
O(n^2)
O(n^2.8074)
O(n^3)
O(n log n)
Strassen's algorithm reduces the number of recursive multiplications from 8 to 7, giving a time complexity of O(n^(log2 7)), which is approximately O(n^2.8074).
What is the time complexity of the SA-IS algorithm for suffix array construction on a string of length n?
O(n log n)
O(n^2)
O(n)
O(n + log n)
SA-IS constructs suffix arrays in linear time by induced sorting of suffixes and recursive problem reduction, achieving O(n) complexity. It is among the fastest practical suffix array construction algorithms.

Study Outcomes

  1. Understand Big O Fundamentals -

    Gain a clear grasp of time and space complexity notation to evaluate algorithm performance effectively.

  2. Analyze Algorithm Complexities -

    Examine code snippets to determine their asymptotic time and space requirements in practical scenarios.

  3. Apply Complexity Analysis -

    Compute Big O for loops, nested structures, and recursive functions to predict their scalability.

  4. Compare Algorithm Efficiency -

    Rank different algorithms by their performance profiles to choose the optimal solution for a given problem.

  5. Identify Optimization Opportunities -

    Spot performance bottlenecks and propose algorithmic improvements to reduce computational cost.

  6. Prepare for Technical Interviews -

    Build confidence in tackling Big O notation questions commonly asked in coding assessments and interviews.

Cheat Sheet

  1. Asymptotic Basics -

Big O describes an upper bound on an algorithm's growth rate. For instance, 3n + 2 is O(n) because constant factors and lower-order terms become insignificant for large n (CLRS, 2009). When tackling Big O notation practice, remember to drop constants and lower-order terms to simplify your analysis.

  2. Common Complexity Classes -

Familiarize yourself with classes like O(1), O(log n), O(n), O(n log n), and O(n²) when solving Big O practice problems. Use the mnemonic "1 LOG N Nancy Naps" to recall the order of growth. According to MIT OpenCourseWare, recognizing these patterns quickly improves your performance on a Big O notation quiz.

  3. Loop and Nested Loop Analysis -

Single loops typically run in O(n), while nested loops often lead to O(n²) time complexity, as shown in CLRS examples. Break down each loop layer and multiply their costs for an accurate evaluation; see the sketch after this list. This technique is essential for clear reasoning in any Big O practice scenario.

  4. Recurrence Relations and the Master Theorem -

Use recurrence relations like T(n) = aT(n/b) + f(n) and apply the Master Theorem to resolve the running times of divide-and-conquer algorithms, commonly O(n log n) or O(n). For example, merge sort's recurrence T(n) = 2T(n/2) + O(n) falls under case 2 of the theorem, giving O(n log n). Stanford's CS library provides extensive recurrence examples for guidance. Mastering this ensures success in algorithm efficiency quizzes and assessments.

  5. Amortized vs Worst-Case Analysis -

    Distinguish between worst-case time (e.g., O(n) for a single hash insertion) and amortized time (average O(1) across many operations) in data structures like dynamic arrays and hash tables. Carnegie Mellon's algorithm course highlights how amortized analysis smooths out spikes in cost. Recognizing this difference boosts confidence in both big o notation practice quizzes and real-world coding challenges.
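As promised in the loop-analysis tip above, here is a minimal sketch of layer-by-layer costing (the function name and data are illustrative):

    def count_pairs(items):
        pairs = 0
        n = len(items)
        for i in range(n):              # outer layer: O(n)
            for j in range(i + 1, n):   # inner layer: up to O(n)
                pairs += 1              # multiply the layers: O(n^2)
        return pairs

    print(count_pairs([1, 2, 3, 4]))  # 6 = n*(n-1)/2, which is O(n^2)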
