Benchmarking

I made this benchmark-esque program.
#include <iostream>
#include <vector>
#include <string>
#include <ctime>

int main()
{
    int repeat = 1;

    while (repeat == 1)
    {
        int testNum, choice = 0;
        long long Num;

        std::cout << "How many tests would you like to do?\n> ";
        std::cin >> testNum;
        std::cout << "How many tens of millions would you like to count to?\n> ";
        std::cin >> Num;

        Num = Num * 10000000LL;                 // convert to the actual count
        std::vector<float> testAver(testNum);   // sized after testNum is known, so it cannot overflow
        int counter = testNum;

        std::cout << "\nPlease do NOT use this computer during the test\n\n";
        std::cout << "To cancel press Control+C\n\nWhen ready press ENTER\n";
        std::cin.get();                         // eat the newline left over by >>
        std::cin.get();                         // wait for ENTER
        std::cout << std::string(32, '\n');     // crude "clear screen"

        for (int tests = 0; tests != testNum; ++tests)
        {
            std::cout << "\nPlease wait...\t\tCounter: " << counter-- << std::endl;

            clock_t start = clock();
            volatile long long i = 0;           // volatile, so the counting loop is not optimized away
            while (i != Num)
            {
                i = i + 1;
            }
            clock_t end = clock();

            // convert clock ticks to seconds; CLOCKS_PER_SEC is not always 1000
            testAver[tests] = static_cast<float>(end - start) / CLOCKS_PER_SEC;
        }

        std::cout << "\nFinish\n";
        std::cout << "Number counted to: " << Num << std::endl;
        std::cout << "Number of times tested: " << testNum << std::endl;

        while (choice != 3)
        {
            std::cout << "\nWould you like to see the average time(1), detailed statistics(2), or exit(3)?\n> ";
            std::cin >> choice;

            if (choice == 2)
            {
                for (int j = 0; j != testNum; ++j)
                    std::cout << "\nTest " << (j + 1) << " took " << testAver[j] << " seconds\n";
            }
            else if (choice == 1)
            {
                float average = 0;
                for (int j = 0; j != testNum; ++j)
                    average += testAver[j];
                average /= testNum;
                std::cout << "\nAverage time to complete: " << average << " seconds\n";
            }
        }

        std::cout << "\nWould you like to repeat(1) or exit(0)?\n> ";
        std::cin >> repeat;
    } // <--- main repeat ending brace

    std::cout << "\nPress ENTER to exit\n";
    std::cin.get();
    std::cin.get();
    return 0;
}


Is it accurate or completely off?
It's not a benchmark. A benchmark tests how long a complex operation takes, often by measuring it several times. For example, how long it takes to sort ten million elements ten times using a given algorithm.
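For illustration, a minimal sketch of what such a measurement could look like. This is only a sketch: the ten-million element count and the ten runs are just the example figures above, std::sort stands in for "a given algorithm", and it assumes a C++11 compiler for <chrono> and <random>.

#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

int main()
{
    const int N = 10000000;   // ten million elements
    const int runs = 10;      // measure several times

    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> dist(0, 1000000000);

    for (int r = 0; r < runs; ++r)
    {
        // fresh unsorted data for every run
        std::vector<int> data(N);
        for (int& x : data) x = dist(rng);

        auto start = std::chrono::steady_clock::now();
        std::sort(data.begin(), data.end());
        auto end = std::chrono::steady_clock::now();

        std::chrono::duration<double> elapsed = end - start;
        std::cout << "run " << (r + 1) << ": " << elapsed.count() << " s\n";
    }
    return 0;
}

The timing brackets only the sort itself, and each run gets fresh random data so the runs are comparable.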
I believe this thing here is called a bogoMIPS calculator. A bogoMIPS is "the number of million times per second a CPU can do absolutely nothing". The question of whether it's accurate is almost meaningless. It's accurate in the sense that yes, that's how long it took to run that code, but that doesn't really tell you anything about how long anything else will take.
I see...

I used it to compare against another programming language, to see which one could complete the task quicker. Is that a fair way to compare?
Not really. Typical cross-language performance comparisons use complex operations that exercise many different kinds of basic operations. Here, you're only using increments, comparisons, and jumps.

http://shootout.alioth.debian.org/
Thanks for the link. I used it to compare against Ruby and there was a 33 second difference. I guess that's why I figured it was sufficient. If it had been closer, I probably would have realized its uselessness. Another couple of hours wasted :S
Just a tip for comparing performance: don't measure the difference, measure the ratio. A 33 second difference doesn't mean the same thing when the faster algorithm took a day to finish as when it took 4 ms.
"A 33 second difference doesn't mean the same thing when the faster algorithm took a day to finish as when it took 4 ms."

I'm sorry, but I don't really understand what you mean...
If algorithm A takes .004 s and algorithm B takes 33.004 s, B is 8251 times slower than A. If A takes 86400 s and B takes 86433 s, B is very probably as fast as A; the difference is so small (~0.04%) that it could be produced by non-deterministic factors like cache misses, page faults, etc.
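To put numbers on it, a trivial sketch using only the figures from the post above:

#include <iostream>

int main()
{
    // first scenario: fast algorithm takes 4 ms, slow one 33.004 s
    double a = 0.004, b = 33.004;
    std::cout << "difference: " << (b - a) << " s, ratio: " << (b / a) << "x\n";   // ~8251x

    // second scenario: both take roughly a day
    a = 86400.0; b = 86433.0;
    std::cout << "difference: " << (b - a) << " s, ratio: " << (b / a) << "x\n";   // ~1.0004x
    return 0;
}

Same 33 second difference in both cases, but the ratio tells two completely different stories.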
Oh, now I see what you mean. In this test it was more consistent: always under 1 second with this and always over 30 seconds with the Ruby version. But thanks for all of the help.