First, which unit of time should I use to track how long each algorithm takes (seconds, milliseconds, nanoseconds, or something else I can keep track of)? I want to know which one gives an accurate elapsed time for each sorting algorithm.
Secondly, how can I average the time over 5 tests for each sorting algorithm?
Your array is tiny, so you need an awfully small unit of time to see what is going on for any interesting algorithm. There isn't one right answer here... it's whatever makes sense for your output so you can compare them. I personally threw in the towel years ago and just report it in seconds in scientific notation.
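For example, a minimal sketch of timing one run with <chrono> might look like this (std::sort stands in for one of your own sorting functions; duration<double> gives elapsed seconds as a double, and std::scientific prints them without committing to milliseconds vs. nanoseconds up front):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <numeric>
#include <vector>

// Time one call of a sorting function and return the elapsed seconds as a double.
template <typename Sort>
double timeOneSort(Sort sortFn, std::vector<int>& data)
{
    auto start = std::chrono::steady_clock::now();
    sortFn(data);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(stop - start).count();
}

int main()
{
    std::vector<int> data(1'000'000);
    std::iota(data.rbegin(), data.rend(), 0);   // descending values, just something to sort

    double t = timeOneSort([](std::vector<int>& v) { std::sort(v.begin(), v.end()); }, data);
    std::cout << std::scientific << t << " s\n";
}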
Use <random> ... everything else in your code is C++, and then you stick the C random generation (rand()/srand()) in there... right alongside autos.
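Something like this, for instance (the name refill and the seed 42 are just illustrative; the point is that the same seed reproduces the same "random" array):

#include <random>
#include <vector>

// Fill the vector with pseudo-random values; the same seed always produces the same data.
void refill(std::vector<int>& v, unsigned seed)
{
    std::mt19937 gen(seed);                                  // Mersenne Twister engine
    std::uniform_int_distribution<int> dist(0, 1'000'000);   // value range is up to you
    for (auto& x : v)
        x = dist(gen);
}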
I can't tell: are you sorting already-sorted arrays after the first trial? Be sure to follow this pattern:
seed random generation (same seed over and over, so all sorts tested on same array)
fill array
test a sort
repeat above.
Otherwise the sorted array will give best-case results, not random results, giving every sort but the first a serious advantage (several algorithms, even a couple of the N*N ones, are just N for presorted arrays).
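Roughly this loop shape, reusing the refill and timeOneSort helpers sketched above (bubbleSort and insertionSort are stand-ins for whatever sorts you are actually comparing):

std::vector<int> arr(100'000);

refill(arr, 42);                                      // seed + fill
double tBubble = timeOneSort(bubbleSort, arr);        // test a sort

refill(arr, 42);                                      // refill: same seed, same unsorted data
double tInsertion = timeOneSort(insertionSort, arr);  // so the next sort isn't handed sorted input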
Averaging the time works like anything else:
totaltime += first test
totaltime += second test
… blah blah
avgtime = totaltime/number of runs
Be sure to re-randomize the array when looping 5 times over the same sort for an average, too. Caching makes timing subsequent runs a pain: you get a huge speedup after the first run from hardware/OS tricks. If you want to average more carefully, run each test once, then each test again, and so on, 5 times through that way; cycling through the different algorithms helps reduce some of the caching effect.
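Put together, the averaging could look something like this, again reusing the helpers from the sketches above (the sorts list is a hypothetical container holding whatever algorithms you are comparing):

#include <functional>

// Hypothetical list of the sorts under test; bubbleSort etc. stand in for your own functions.
std::vector<std::function<void(std::vector<int>&)>> sorts = {
    bubbleSort, insertionSort, selectionSort
};

const int trials = 5;
std::vector<double> total(sorts.size(), 0.0);

// Trial 1 of every sort, then trial 2 of every sort, and so on: interleaving the
// algorithms and refilling the array each time reduces the caching advantage.
for (int t = 0; t < trials; ++t)
    for (std::size_t s = 0; s < sorts.size(); ++s)
    {
        refill(arr, 42);
        total[s] += timeOneSort(sorts[s], arr);
    }

for (std::size_t s = 0; s < sorts.size(); ++s)
    std::cout << std::scientific << total[s] / trials << " s average\n";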
I believe you need about 5 million items to get into the tenths of seconds on a low-end PC if you compile with maximum optimization settings.
OK, thanks for that. So you mean any unit of time is fine, as long as it measures the elapsed time accurately enough. Is that correct?
I don't understand what you mean. Time is time. It's as accurate as your CPU clock and timer tool allow. For displaying it, you just need enough digits to see how long they took relative to each other (again, scientific notation seems like the easy fix).
Yes, add up the times and divide by the number of trials. I said that. But you want to break it up so you don't call the same algorithm back to back, or the computer will cheat with caching and make it faster than a cold start would be.