An excerpt from a discussion on the TI MSP430 Forums regarding trade-offs between sticking with an old toolchain that you’re used to, and continually updating:
You can stick with an existing, “well understood” system, and assume that you’re safe because it passes what you think is important to test. Or you can keep up to date with what’s provided by a vendor (who sees a lot more use cases and variations than you do). This is a management choice.
All I can say is that, in my own multi-decade experience, the biggest long-term source of destabilization comes not from regular updates to the current toolchain, but from staying with old tools until something happens that forces you to make a multi-version jump to a new compiler. (And I agree that a new version is a new compiler and cannot just be assumed to work; this is why one should develop complete regression suites with test harnesses to check the “can’t happen but actually did once” situations.) I can’t see what happens in proprietary systems, but it’s been many years since a GCC update has revealed an undesirable behavioral change that wasn’t ultimately a bug in my own code, and fixing each such bug improved the quality of that code and of all code I’ve worked on since.
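To make that concrete, below is a minimal sketch of the kind of regression check the post describes: a tiny, self-contained test that gets re-run against every new toolchain version. The `elapsed_ticks` helper and the timer-wraparound scenario are illustrative assumptions, not anything from the original discussion; the point is only that each “can’t happen but actually did once” case becomes a permanent, executable assertion.

```c
/* A minimal sketch of one regression check, in plain C with assert().
 * The wraparound scenario is a hypothetical example of a "can't happen
 * but actually did once" case, not taken from the original post. */
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper under test: elapsed ticks between two 16-bit timer
 * readings, which must stay correct when the counter wraps past 0xFFFF. */
static uint16_t elapsed_ticks(uint16_t start, uint16_t now)
{
    /* The cast back to uint16_t relies on well-defined unsigned wraparound,
     * even though the subtraction itself is performed after promotion to int. */
    return (uint16_t)(now - start);
}

int main(void)
{
    /* The edge case that once bit us: a wrap between the two readings. */
    assert(elapsed_ticks(0xFFF0u, 0x0010u) == 0x0020u);
    /* The ordinary case, so a "fix" for the edge case can't break normal use. */
    assert(elapsed_ticks(100u, 250u) == 150u);
    return 0;
}
```

Run under the old and the candidate toolchain alike, a suite of such checks is what lets you treat a compiler update as a routine event rather than a leap of faith.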
If you’re operating in a regulated environment where the cost of updating/certifying is prohibitive, so be it. The best approach in that case is to stick with the toolchain used for the original release of the product, and to release new products with the most recent toolchain so you’re always taking advantage of the best available solution at the time.
I’m not saying there’s a universally ideal policy, e.g. that you should always use the current toolchain. I am saying that a shop that develops and releases new products using old toolchains without a strong reason behind that decision is not using best practices and is likely to produce an inferior product. If management thinks they’re saving money and reducing risk by not updating, there’s a good chance they’re being short-sighted.