This isn't the standard ulimit problem - I've triple-checked all the configuration for that.
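To illustrate what I mean by checking the configuration, the limit the process actually runs with can be queried from inside it along these lines (this is a sketch of that kind of check, not my exact verification):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Query the core-file size limit in effect for this process. */
    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("core soft limit: unlimited\n");
        else
            printf("core soft limit: %lld bytes\n", (long long)rl.rlim_cur);
    }
    return 0;
}
```

The soft limit reported there is unlimited, so the truncation isn't coming from that.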
The core file produced by my app gets truncated to about 270MB when the code crashes, yet when I send SIGSEGV to the process manually I get a full 700MB core (covering the entire address space). My best guess is that the code is actually crashing twice, with the second crash cutting the core dump short.
I have a signal handler function set up that successfully intercepts all signals - is there a way I can force the generation of a complete core from within that handler? (i.e. prevent the presumed second crash from terminating the dump early)
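For context, here is a minimal sketch of the sort of handler arrangement I'm describing - the handler name and the restore-default-and-re-raise step are illustrative, not exactly what's in production:

```c
#include <signal.h>
#include <string.h>

/* Illustrative fatal-signal handler: hand the signal back to the kernel
 * so the default action (core dump) runs, rather than dumping ourselves. */
static void fatal_handler(int sig)
{
    /* (async-signal-safe logging would go here) */
    signal(sig, SIG_DFL);   /* restore the default disposition */
    raise(sig);             /* re-deliver; default action writes the core */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = fatal_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESETHAND;   /* one-shot: a second fault gets the default action */

    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGBUS,  &sa, NULL);
    sigaction(SIGABRT, &sa, NULL);

    /* ... application code ... */
    return 0;
}
```

Is restoring the default disposition and re-raising like this enough to get an untruncated core, or is there a better way to do it from inside the handler?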
The main obstacle I'm facing is that I can't replicate the crash in a test environment, and yet lately it's been happening in production more and more frequently.