In this ICS4U Grade 12 Computer Science lesson you will be learning how to analyze the efficiency of an algorithm, first by measuring its run time and then through an introduction to time complexity.
Back in the recursion lesson you were introduced to the timeit library in Python, which is a pretty straightforward way to measure how long a certain piece of code takes to execute.
import timeit
start = timeit.default_timer()
#Code to find the running time for
end = timeit.default_timer()
#Find the time it took the code to run
time = end - start
The time is recorded in seconds. Since the code blocks you are looking at likely execute pretty quickly, it might be better to look at a different unit such as milliseconds, microseconds, or nanoseconds.
To Convert:
import timeit
start = timeit.default_timer()
#Code to find the running time for
end = timeit.default_timer()
#Find the time it took the code to run
time = end - start
timeMilli = time * 10**3
timeMicro = time * 10**6
timeNano = time * 10**9
Feel free to round the answers as well if it makes them easier to analyze, since 20.0 microseconds, 20.1 microseconds, and 20.2 microseconds aren't that different.
The run time is going to be affected by many factors that have nothing to do with how "efficient" your code is. Running the same code on different machines with different processors will produce different run times. Even running the code on the same machine will produce different run times depending on what else the processor is doing at the time. The data being analyzed can also have a significant effect on the run time of a particular algorithm. So using raw run times to do comparisons is not always the best approach.
Deciding on the "efficiency" of a particular algorithm using only raw run times isn't the greatest idea. But it can definitely give you an initial sense of which algorithm might be more efficient in terms of execution time.
import random
import timeit

# Quick sort and bubble sort functions -> try filling these in yourself first;
# reference implementations are shown so the timing code actually runs.

# Partition a[first..last] around a pivot, then sort each side
def sortPartition(a, first, last):
    if first >= last:
        return
    pivot = a[last]
    i = first - 1
    for j in range(first, last):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[last] = a[last], a[i + 1]
    sortPartition(a, first, i)
    sortPartition(a, i + 2, last)

def quickSort(a):
    sortPartition(a, 0, len(a) - 1)

# Repeatedly swap adjacent out-of-order elements
def bubbleSort(a):
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]

# Function to generate a random list
def genRandomList(n, minVal, maxVal):
    a = []
    for i in range(0, n):
        a.append(random.randint(minVal, maxVal))
    return a

# Copy an existing list into a new list
def copyList(a):
    newList = []
    for i in range(0, len(a)):
        newList.append(a[i])
    return newList

def main():
    # Generate a random list with 5000 elements between 0 and 20000
    originalList = genRandomList(5000, 0, 20000)
    # Each sort needs its own copy of the list because the sort functions modify the original
    blist = copyList(originalList)
    qlist = copyList(originalList)
    # Time bubble sort on the list
    start = timeit.default_timer()
    bubbleSort(blist)
    end = timeit.default_timer()
    bubbleTimeMicro = int((end - start) * 10**6)
    print("Bubble Time:", bubbleTimeMicro)
    # Time quick sort on the list
    start = timeit.default_timer()
    quickSort(qlist)
    end = timeit.default_timer()
    quickTimeMicro = int((end - start) * 10**6)
    print("Quick Time:", quickTimeMicro)

main()
Some notes on the code:
Each sorting function gets its own copy of the list, because the sorts work in place; without copies the second sort would receive an already-sorted list.
int() truncates the microsecond values, which is fine at this level of precision.
The list is randomly generated, so every run of the program times different data.
Conclusions:
When you run this, quick sort should come out dramatically faster than bubble sort on a 5000 element list, and the gap widens as the list grows. The exact numbers will differ between runs and between machines.
Options to deal with variability:
Run the code several times and average the run times.
Compare algorithms on the same machine, on the same data, under similar load.
Use larger inputs so the measured times dwarf the timer's overhead.
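As a sketch of the averaging approach (the helper names, list size, and trial count here are arbitrary choices, not part of the lesson):

```python
import timeit

# Hypothetical helper: time one call of func(arg) in microseconds
def timeOnceMicro(func, arg):
    start = timeit.default_timer()
    func(arg)
    end = timeit.default_timer()
    return (end - start) * 10**6

# Average the run time over several trials to smooth out variability
def averageTimeMicro(func, data, trials):
    total = 0
    for _ in range(trials):
        total += timeOnceMicro(func, list(data))  # fresh copy each trial
    return total / trials

# Average 5 runs of Python's built-in sorted() on a reversed list
avg = averageTimeMicro(sorted, list(range(2000, 0, -1)), trials=5)
print("Average time (microseconds):", round(avg, 1))
```

The average is still machine-dependent, but it is much more stable between runs than a single measurement.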
It's not always practical to check the run time of an algorithm to test its efficiency, especially if we need to take an averaging approach to remove some variability. Instead it's easier to use a theoretical approach.
There are two types of analysis we could do: time complexity analysis (how the running time grows as the input grows) and space complexity analysis (how the memory usage grows as the input grows).
This course provides a brief introduction to Time Complexity Analysis, often referred to as Big O Notation. Big O Notation aims to provide a way to analyze an algorithm as it scales with larger amounts of data.
For any algorithm you can analyze three cases:
Best Case Complexity
Average Case Complexity
Worst Case Complexity
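As a concrete illustration of the three cases, consider a linear search (a small example added here, not from the lesson's code):

```python
# Linear search: scan the list from the front until the target is found
def linearSearch(a, target):
    for i in range(len(a)):
        if a[i] == target:
            return i
    return -1

a = [4, 8, 15, 16, 23, 42]
print(linearSearch(a, 4))    # best case: target is first, 1 comparison -> O(1)
print(linearSearch(a, 42))   # worst case: target is last, n comparisons -> O(n)
print(linearSearch(a, 7))    # not found: also n comparisons
```

The algorithm is the same in every case; what changes between best, average, and worst case is the data it is given.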
In general, you can assume the time complexity of Python's built-in sorting functions, sort() and sorted(), is O(n*log(n)).
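In practice that means the built-in sort can replace the hand-written sorts above (genRandomList here mirrors the earlier example):

```python
import random

# Generate a random list, as in the timing example above
def genRandomList(n, minVal, maxVal):
    return [random.randint(minVal, maxVal) for _ in range(n)]

a = genRandomList(5000, 0, 20000)
b = sorted(a)   # returns a new sorted list, leaves a unchanged
a.sort()        # sorts a in place
print(a == b)   # prints True: both produce the same ordering
```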