I'll keep things simple with my question: I have a client-server app I'm writing in which the client will send commands to the server to be executed. My question is which of these is the better way to write the program:
1) Check the commands for correctness client-side and issue the error there, meaning an invalid command never gets sent to the server.
2) Check the commands for correctness server-side and let the server issue the error message.
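To make the two options concrete, here is a rough sketch of what I mean (the command names and helper functions are just placeholders, not my actual code):

```cpp
#include <iostream>
#include <string>
#include <set>

// Option 1: the client checks the command before anything is sent.
bool client_side_valid(const std::string& cmd)
{
    static const std::set<std::string> known = { "cd", "dir", "setwall" };
    return known.count(cmd.substr(0, cmd.find(' '))) > 0;
}

// Option 2: the server runs the same kind of check and builds an error reply.
std::string server_handle(const std::string& cmd)
{
    if (!client_side_valid(cmd))       // reuse the check on the server side
        return "ERROR: unknown command";
    return "OK";                       // the real server would execute it here
}

int main()
{
    std::string cmd = "frobnicate C:\\";
    if (!client_side_valid(cmd))
        std::cout << "client: rejected locally, nothing sent\n";  // option 1
    std::cout << "server: " << server_handle(cmd) << '\n';        // option 2
}
```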
Let the server check the command for correctness, but don't send an error message: just send the completion indication, or throw the bogus command away silently, so any evil person does not know what's wrong with their fake...
It depends on what the server does and how security critical it is.
If the server is supposed to execute a binary file but can't find it, that is an error the client cannot catch before sending the command. Informing the client can help the user figure out what is going on (though, as with everything, this can be misused).
On the other hand, if the client is able to pre-parse the commands and check for errors, it should do that as well, since it saves bandwidth.
With a little more knowledge about what the client and server actually do, we might be able to help you improve on the idea further.
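As a rough sketch of that split (the names are made up and there is no real networking here): the client only rejects obvious syntax mistakes before sending, while the server reports things only it can know, such as a directory that doesn't exist on its side.

```cpp
#include <filesystem>
#include <iostream>
#include <string>

// Client side: a cheap syntax check, e.g. "cd <path>".
bool looks_like_valid_cd(const std::string& cmd)
{
    return cmd.rfind("cd ", 0) == 0 && cmd.size() > 3;
}

// Server side: only the server can know whether the path exists over there.
std::string server_cd(const std::string& path)
{
    if (!std::filesystem::is_directory(path))
        return "The system cannot find the path specified.";
    return "OK";
}

int main()
{
    std::string cmd = "cd C:\\DoesNotExist";
    if (!looks_like_valid_cd(cmd))
    {
        std::cout << "client: bad syntax, not sent\n";   // saves the round trip
        return 0;
    }
    std::cout << "server: " << server_cd(cmd.substr(3)) << '\n';
}
```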
Fair enough. This will probably make you laugh somewhat, but it was in fact an idea given to me by helios. The program is a remote desktop wallpaper changer that is (well... will be) written so that it can be opened as either a client or a server, as opposed to having two separate programs.
The style the program uses to navigate directories, and the error messages it outputs, mimic (more or less) those of the Windows command line, e.g. commands such as "cd", "dir", etc.
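For example, this is roughly the kind of command handling I have in mind (just a toy dispatch table to show the idea, nothing from the real program):

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

int main()
{
    // Map each command name to a stub handler.
    std::map<std::string, std::function<void(const std::string&)>> commands = {
        { "cd",  [](const std::string& arg) { std::cout << "change dir to " << arg << '\n'; } },
        { "dir", [](const std::string& arg) { std::cout << "list " << arg << '\n'; } },
    };

    std::string line = "copy a.bmp b.bmp";                // pretend user input
    std::string name = line.substr(0, line.find(' '));
    std::string arg  = line.find(' ') == std::string::npos
                     ? "" : line.substr(line.find(' ') + 1);

    auto it = commands.find(name);
    if (it == commands.end())
        std::cout << '\'' << name << "' is not recognized as an internal"
                     " or external command.\n";           // Windows-style error
    else
        it->second(arg);
}
```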
After reading RedX's post, I'm assuming you would most likely pick option 1 as the best method.