Responding to the original question, which noted the tension between the standards committee trying its hardest not to make breaking changes and companies trying their hardest not to update compilers anyway.
I think it's inevitable because of C++'s long history, and just how businesses work. The business knows that its program works and does its job with that particular compiler version.
"If it ain't broke, don't fix it."
Of course, a product should have regression testing against patches and changes, but you won't necessarily have 100% test coverage, especially if your system interacts with other real systems that can only be simulated in testing.
C++ is used in so many different places, some quite low-level, and it sometimes just isn't worth the time, effort, and risk to verify that a newer compiler doesn't break existing code, even if the potential bug it exposes was the programmer's fault and not C++'s fault.
e.g. when I updated from .NET 4.6.2 to .NET 4.7.2, something related to ASP.NET broke; I forget where the fault lay. This happens in more than just C++, although I'd say C++ is particularly prone to it because it has more gaps where undefined or implementation-defined behavior can creep in.
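To make that concrete, here's a minimal sketch (my own illustration, not anything from the original post) of the kind of latent bug that a compiler upgrade can surface: signed integer overflow is undefined behavior, so a newer or more aggressive optimizer is allowed to assume it never happens and quietly remove the check.

```cpp
#include <iostream>
#include <limits>

bool will_overflow(int x) {
    // Intended as an overflow check, but "x + 1" itself overflows when
    // x == INT_MAX, which is undefined behavior. An optimizer may legally
    // fold this whole expression to "false".
    return x + 1 < x;
}

int main() {
    int big = std::numeric_limits<int>::max();
    // May print 1 on an older or unoptimized build and 0 after upgrading the
    // compiler or enabling optimizations. The program was always wrong; the
    // upgrade just changes how the wrongness shows up.
    std::cout << will_overflow(big) << '\n';
}
```

The point isn't that the new compiler is buggy, it's that code like this was only ever working by accident, and a business can't always tell how much of that it's sitting on.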
As for how the standards committee makes changes... that I know less about. I know they initially poll a feature, with the options {strongly in favor, in favor, neutral, against, strongly against}. They take the opinions of those against and strongly against into consideration even if there are more in favor, and back-and-forth discussion follows to see whether a resolution can be reached.
Here's a recent CppCon talk about the difficulties in changing the standard to improve two features that currently aren't zero-overhead in C++ (exceptions and RTTI):
https://www.youtube.com/watch?v=ARYP83yNAWk