> I have long been under the impression that looping over all values of a 32-bit integer will take a very long time no matter what operation you do on each iteration.
It does. Assuming you can do 1 iteration in 1 cycle (read: you can't, it takes longer)... you have 2^32 ≈ 4.3 billion iterations... which means roughly 4 billion cycles.
If you have a 4 GHz processor, that works out to about 1 full second of 100% CPU time. (EDIT: to clarify, 100% of a single thread)
But of course, like I said, any operation is going to take longer than 1 cycle... so it'll actually be much longer than that. If an iteration takes 10 cycles (more reasonable, though probably still shorter than it would really be), that's 10 full seconds.
And that's assuming the bottleneck is the CPU (which it probably wouldn't be).
So yeah. Processing 4 GB of data is going to take a while.
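If you want to put a number on it yourself, here's a minimal sketch in C (the build command and timings are just illustrative, not from the original post) that does one trivial operation for every 32-bit value and measures how long it takes. The volatile accumulator keeps the compiler from optimizing the loop away, which also makes each iteration cost a few cycles rather than one:

```c
/* Rough sketch: time a trivial loop over every 32-bit value.
   Build with something like: gcc -O2 loop.c -o loop */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    volatile uint64_t sum = 0;   /* volatile: forces a real memory update each iteration */
    clock_t start = clock();

    /* i is 64-bit so the loop terminates after covering 0..UINT32_MAX */
    for (uint64_t i = 0; i <= UINT32_MAX; i++) {
        sum += i;                /* one trivial operation per iteration */
    }

    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("sum = %llu, took %.2f seconds\n", (unsigned long long)sum, secs);
    return 0;
}
```

On a ~4 GHz machine this tends to land somewhere in the single-digit-seconds range, which lines up with the 1-cycle vs 10-cycle estimates above.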
> However it occurred to me that we deal with large files (e.g. more than 2 or 4 GB) every day, and I realized it must be possible because we read and write in large chunks.
This all depends on the program you're using and how the data is being handled.
Case in point: try to open a 4 GB text file in Notepad.
Then try to open the same file in Notepad++.
Notepad will take several (10-15?) seconds to load.
Notepad++ will load it virtually instantly.
It's not that Notepad++ has some super secret way of reading the data faster. It's that it doesn't try to read all the data at once; it only reads what it needs in order to display what the user wants to see.
Notepad, on the other hand, will load the entire file into memory (slow) before showing anything to the user.
Smarter programs use tricks like that, doing the heavy processing on demand or in the background, so they stay responsive and fast.
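To make the "only read what you need" idea concrete, here's a small C sketch (not Notepad++'s actual internals, just an assumed example with a placeholder filename and chunk size) that streams a big file one fixed-size chunk at a time, so memory use stays flat no matter how large the file is:

```c
/* Rough sketch: process a large file in fixed-size chunks
   instead of loading the whole thing into memory at once. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE (1 << 20)   /* 1 MiB per read; placeholder value */

int main(void) {
    FILE *f = fopen("bigfile.txt", "rb");   /* placeholder filename */
    if (!f) { perror("fopen"); return 1; }

    char *buf = malloc(CHUNK_SIZE);
    if (!buf) { fclose(f); return 1; }

    unsigned long long lines = 0;
    size_t n;
    while ((n = fread(buf, 1, CHUNK_SIZE, f)) > 0) {
        /* do whatever per-byte work you need, on this chunk only */
        for (size_t i = 0; i < n; i++)
            if (buf[i] == '\n') lines++;
    }

    printf("lines: %llu\n", lines);
    free(buf);
    fclose(f);
    return 0;
}
```

An editor that wants to jump straight to one part of the file can go further and fseek() to just the region it needs to display, instead of streaming from the beginning.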
> Video processing also blows my mind.
That's another topic entirely ;P