I want to read a text file which may have around 90,000 lines, where each line contains just a single word.
I want to read that file using fstream, and I would like suggestions on how to handle this efficiently. Should I read the entire file in one go, or should I read it line by line using the getline() function?
Any suggestions for handling the file efficiently will be greatly appreciated.
If you need to search through it, reading line by line lets you stop as soon as you find the text.
If you have the memory, reading it all in at once can save time if you need to do more with it afterwards.
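A minimal sketch of the line-by-line approach; "words.txt" and the "needle" search term are just placeholders I made up for illustration:

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        std::ifstream in("words.txt");
        if (!in) {
            std::cerr << "could not open file\n";
            return 1;
        }
        std::string word;
        // one word per line, so each getline() yields one word
        while (std::getline(in, word)) {
            if (word == "needle") {   // stop as soon as we find it
                std::cout << "found it\n";
                break;
            }
        }
    }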
It's a trade-off between efficiency and resources. The more trips you make to the disk, the slower it will be; but if you read the full file in one go, your application may run out of memory. If you are sure the file size will never grow beyond a certain limit, and that limit fits comfortably in memory, then you can read the full file in one go.
If the size is going to be very big, it is better to read it in chunks. Check the file size, not the number of lines. A file with 100,000 lines and only one word on each line would not be a very big file, though: at roughly 6 bytes per line (word plus newline), that is only about 600 KB [(100,000 * ~6) bytes].
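Here is a rough sketch of checking the size first and then slurping the whole file in a single read; the 64 MB limit and "words.txt" are arbitrary values I picked for illustration:

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        // open at the end so tellg() gives us the file size in bytes
        std::ifstream in("words.txt", std::ios::binary | std::ios::ate);
        if (!in) return 1;
        std::streamsize size = in.tellg();
        const std::streamsize limit = 64 * 1024 * 1024; // arbitrary in-memory bound
        if (size > limit) {
            std::cerr << "file too big, read it in chunks instead\n";
            return 1;
        }
        in.seekg(0);
        std::string buffer(static_cast<std::size_t>(size), '\0');
        in.read(&buffer[0], size);   // whole file in one read
        // buffer now holds the entire file; split on '\n' as needed
    }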
If it's a normal application, getline() will give you good performance. But if it's a time-critical application, you will have to think about other approaches, such as reading the file into a raw buffer and processing that buffer in parallel with multiple threads.
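A rough sketch of that idea, assuming the work is something simple like counting lines. Each thread gets a slice of the buffer, with slice boundaries snapped to newlines so no word straddles two threads:

    #include <algorithm>
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <thread>
    #include <vector>

    int main() {
        std::ifstream in("words.txt", std::ios::binary);
        std::string buf((std::istreambuf_iterator<char>(in)),
                        std::istreambuf_iterator<char>());

        const unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::size_t> counts(n, 0);
        std::vector<std::thread> workers;

        std::size_t begin = 0;
        for (unsigned i = 0; i < n; ++i) {
            // end each slice on a newline so slices partition the buffer cleanly
            std::size_t end = (i == n - 1)
                ? buf.size()
                : buf.find('\n', buf.size() * (i + 1) / n);
            if (end == std::string::npos) end = buf.size();
            workers.emplace_back([&buf, &counts, begin, end, i] {
                counts[i] = std::count(buf.begin() + begin, buf.begin() + end, '\n');
            });
            begin = end;
        }
        for (auto& t : workers) t.join();

        std::size_t total = 0;
        for (std::size_t c : counts) total += c;
        std::cout << "lines: " << total << '\n';
    }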
That might depend on the machine it's going to run on. Many newer machines can handle fairly large buffer sizes. I believe I would go with 64 MB at a time; you can increase or decrease that based on your wait time.
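For example, a chunked read loop along those lines; the 64 MB chunk size and "words.txt" are placeholders to tune for your machine:

    #include <fstream>
    #include <vector>

    int main() {
        std::ifstream in("words.txt", std::ios::binary);
        const std::size_t chunk = 64 * 1024 * 1024;   // 64 MB per read
        std::vector<char> buf(chunk);
        while (in) {
            in.read(buf.data(), buf.size());
            std::streamsize got = in.gcount();        // bytes actually read
            if (got == 0) break;
            // process buf[0 .. got) here; watch for a word split across chunks
        }
    }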