Most of the time, clients come to us with a specific request or problem and we try to find the best solution. Then we incorporate it into the next release of SuperTest. But we do much more than just work on a ‘you ask, we answer’ model. We like our job, so we are always working on new test capabilities to stay ahead of the changing compiler landscape. And that always leads to some exciting new insights.

For the past year or so, we’ve been working on enhancing SuperTest’s ability to test compiler optimizations. Embedded system developers know that compiler optimizations have huge economic value. Compiler-optimized code can accelerate program execution by a factor of fifteen or more. For portable devices, that could translate into a battery that is fifteen times smaller! From the language perspective, of course, optimizations don’t exist. Language definitions such as C or C++ only define language behavior. They say nothing about how that behavior is achieved. Optimization is a so-called ‘non-functional requirement’.
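To make that concrete, here is a minimal sketch (illustrative only, not SuperTest code), assuming a typical optimizing C compiler such as GCC or Clang: the language standard only requires that the function returns the right value, and an optimizer is free to change how that value is computed.

    /* Illustrative sketch only: the C standard defines what this function
       returns, not how the result is computed. */
    long triangle(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++) {
            sum += i;   /* at higher optimization levels, a compiler may
                           replace the entire loop with the closed form
                           n * (n + 1) / 2 */
        }
        return sum;
    }

Both versions are correct as far as the language definition is concerned; the difference lies entirely in the non-functional property of how fast the result is produced.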

For a typical compiler, more than half of the source code is optimization related. All that code should therefore be robust and well tested, but this is not always the case. Our recent experiments with optimization testing have uncovered errors in many of the compilers we could get our hands on. We already do a good job of testing optimizing compilers, especially when it comes to meeting the formal requirements of functional safety standards. But our ultimate goal is far more ambitious: we don’t just want to meet formal requirements, we want to create a high-quality optimization test suite with widespread applicability across many different use cases.

The story so far – benchmarks are not good tests

To gain insight into the code that triggers optimization, we turned to benchmarks, some of them well known to compiler developers. From this, we learned a number of lessons that can be summarized as follows: benchmarks are not good tests for the correctness of compiler optimizations. This is because benchmarks don’t always verify their results; they are not written to deal with different data models; they may not be free of undefined behavior; and last but certainly not least, they don’t execute all the generated code. The first three in that list are properties of the test’s source code, so they can be fixed by considering each test by itself. But the last problem – failing to execute all the generated code – depends on the optimization transformations applied by the compiler. This is what our new optimization test suite is engineered to solve.
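The first point is easy to illustrate. Below is a minimal, hypothetical sketch (not taken from SuperTest or from any real benchmark suite) showing the difference between a benchmark-style kernel, which only produces a value to be timed, and a self-checking test, which compares the computed result against an independently known value so that a miscompiled optimization is actually detected.

    #include <stdio.h>

    /* Benchmark style: this loop is a typical target for strength reduction
       or vectorization, but by itself nothing checks that the optimized
       code still computes the right value. */
    static long kernel(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i * 2;
        }
        return sum;
    }

    int main(void) {
        long result = kernel(1000);

        /* Test style: verify the result against a value known independently
           of the compiler, so an optimization that changes the result is
           caught instead of silently ignored. */
        if (result != 999000L) {   /* 2 * (0 + 1 + ... + 999) = 999 * 1000 */
            printf("FAIL: got %ld\n", result);
            return 1;
        }
        printf("PASS\n");
        return 0;
    }

A benchmark typically reports only how long the kernel took; whether the answer is still correct after aggressive optimization is exactly the question a correctness test has to ask.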

In this project, we have found juicy optimization errors in many of the compilers we have analyzed. This does not mean that optimizations cannot be trusted in general, but that it pays to tread carefully. The most important conclusion is that if you want to fully leverage compiler optimizations, you need to know the weaknesses of your compiler. Fortunately, SuperTest takes you a long way down that road.

Solid Sands’ CTO, Marcel Beemster, has written a whitepaper titled ‘Compiler verification, benchmarks and optimization errors’. If you want to know more about this topic, download the whitepaper or contact us.


