I have a question... will be glad if anyone can help :)
Theoretically, can a hash table be a really nice way to reduce computation time?
I have a data file, about 30 MB, well indexed. The hash function is really simple and gives me an index value.
I only have to open the file, seek to that index, and read approximately 50 records starting there.
The alternative is to generate those 50 records at runtime through some complex calculation.
Which one do you suggest will be better?
50 records, each record 3 bytes maximum.
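To make the lookup approach concrete, here is a minimal sketch of what I mean. It assumes fixed-width 3-byte records laid out bucket by bucket, and `simple_hash` is just a placeholder for my actual hash function:

```python
RECORD_SIZE = 3          # each record is at most 3 bytes (assumed fixed-width here)
RECORDS_PER_BUCKET = 50  # one lookup reads ~50 records

def simple_hash(key: int, num_buckets: int) -> int:
    # Placeholder for the "really simple" hash function mentioned above.
    return key % num_buckets

def lookup(path: str, key: int, num_buckets: int) -> list:
    """Read the 50 records for `key` straight from the data file."""
    bucket = simple_hash(key, num_buckets)
    offset = bucket * RECORD_SIZE * RECORDS_PER_BUCKET
    with open(path, "rb") as f:
        f.seek(offset)  # jump directly to the bucket, no scan of the 30 MB file
        blob = f.read(RECORD_SIZE * RECORDS_PER_BUCKET)
    # Split the raw bytes into individual 3-byte records.
    return [blob[i:i + RECORD_SIZE] for i in range(0, len(blob), RECORD_SIZE)]
```

So the whole cost of one lookup is a seek plus a 150-byte read, instead of redoing the complex calculation.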
Also, will I reduce computation further if I store each set of records in a separate data file, and the index tells me which of those files to read?