Let's say there is a large project that can be segmented into "modules". Is there any known performance gain in compiling the modules into several static libraries and then linking them, as opposed to linking all of the object files into the executable directly?
There will definitely be a performance gain when compiling.
Not really.
Suppose you have a program A that uses files 1.cpp, 2.cpp, and 3.cpp. In A, 2.cpp and 3.cpp are compiled and archived into a static library, which is then linked with 1.o to produce the executable.
There's a different program B that uses the same files, but simply links all three in one step.
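For concreteness, here is a minimal sketch of the two pipelines, assuming a GCC-style toolchain (g++ and ar) and a hypothetical archive name libmod.a; the exact commands your build system emits will differ:

    # Program A: 2.o and 3.o are bundled into an intermediate static library
    g++ -c 1.cpp -o 1.o
    g++ -c 2.cpp -o 2.o
    g++ -c 3.cpp -o 3.o
    ar rcs libmod.a 2.o 3.o      # create/refresh the archive
    g++ 1.o libmod.a -o A        # link the executable against the archive

    # Program B: all object files are linked in one step
    g++ -c 1.cpp -o 1.o
    g++ -c 2.cpp -o 2.o
    g++ -c 3.cpp -o 3.o
    g++ 1.o 2.o 3.o -o B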
If only 1.cpp changes before the next rebuild, the build system only needs to compile 1.cpp and link the executable to build either A or B. Sources that don't change are not recompiled.
If either 2.cpp or 3.cpp change, the build system for A needs to recompile that source and then regenerate the intermediate static library before being able to link the executable (3 steps). The build system for B only needs to recompile the one source that changed to be able to link the executable (2 steps).
This only makes sense if 2.cpp and 3.cpp are expected to change very infrequently.
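Continuing the same sketch, this is what each build system has to rerun if only 2.cpp is touched (again assuming the g++/ar commands and the hypothetical libmod.a name from above):

    # Rebuilding A: recompile, re-archive, re-link (3 steps)
    g++ -c 2.cpp -o 2.o
    ar rcs libmod.a 2.o 3.o
    g++ 1.o libmod.a -o A

    # Rebuilding B: recompile, re-link (2 steps)
    g++ -c 2.cpp -o 2.o
    g++ 1.o 2.o 3.o -o B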
From a more conceptual point of view, you shouldn't divide a project into separate libraries unless the subsystem you're externalizing doesn't depend on any other part of the project.
    OK to externalize module B:      (module A) ===calls code from===> (module B)

    Not OK to externalize module B:  (module A) ===calls code from==========\
                                         ^                                  |
                                         |                                  v
                                         '=======calls code from======= (module B)
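As a hedged aside on why the second diagram causes trouble with static libraries in particular: a traditional single-pass linker resolves archives left to right, so mutually dependent archives may have to be listed more than once, or grouped with GNU ld's --start-group/--end-group options. The names below (main.o, libA.a, libB.a, prog) are purely illustrative:

    # Repeat an archive so a second pass over it resolves the back-reference
    g++ main.o libA.a libB.a libA.a -o prog

    # Or, with GNU ld, let the linker iterate over the group until it converges
    g++ main.o -Wl,--start-group libA.a libB.a -Wl,--end-group -o prog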
The project is implemented in a way that changing one of the .cpp files only requires recompiling that module and re-linking.
As far as performance goes, I was thinking more in terms of execution performance: will one method produce a faster-running program, will one produce a smaller executable, etc.?