The classic theory of static analysis aims to catch all problems by overapproximating the set of executions. In practice, this has the unfortunate consequence that false positives are to be expected. Soon, developers learn to ignore what the static analysis says. To convince them that static analysis reports a real issue, it helps to connect the report to data collected at runtime. Here are two examples: if static analysis says that a crash is possible, then it helps to link that report to stack traces in a crash database; and if static analysis says that certain data can flow into a database, then it helps to link that report to a query on the database. Once developers are convinced that a problem is real, they are more likely to look at the static analysis report carefully, where they can find clues to the root cause of the problem.
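To make the first kind of link concrete, here is a minimal sketch in Python of matching a report's source location against stack traces from a crash database. The Report and CrashFrame shapes and the link_to_crashes function are hypothetical illustrations, not part of any particular tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    """A static analysis report pointing at a source location (hypothetical shape)."""
    file: str
    procedure: str

@dataclass(frozen=True)
class CrashFrame:
    """One frame of a stack trace stored in a crash database (hypothetical shape)."""
    file: str
    procedure: str

def link_to_crashes(
    report: Report, crash_traces: list[list[CrashFrame]]
) -> list[list[CrashFrame]]:
    """Return the crash traces containing a frame that matches the report's location.

    A report corroborated by even one real crash is far more convincing
    than the report alone.
    """
    def matches(frame: CrashFrame) -> bool:
        return frame.file == report.file and frame.procedure == report.procedure

    return [trace for trace in crash_traces if any(matches(f) for f in trace)]
```

In practice the matching must be fuzzier than strict equality, since inlining, symbol mangling, and code churn all move locations around; exact field equality is used here only to keep the sketch short.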

In both examples so far, we assume that it is possible to link a static analysis report to some runtime evidence. But sometimes such a link is difficult to find. In such cases, static analysis reports can still be prioritized based on profiling data. If static analysis points to a possible performance problem, then it makes sense to look first at those reports that point to code executed frequently. Conversely, if static analysis points to a possible reliability or security problem, then it makes sense to look first at those reports that point to code executed infrequently.
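A minimal sketch of this prioritization, assuming (hypothetically) that each report carries the procedure it points to and that profiling yields a per-procedure execution count:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    """A static analysis report pointing at a procedure (hypothetical shape)."""
    procedure: str
    message: str

def prioritize(
    reports: list[Report], hit_counts: dict[str, int], kind: str
) -> list[Report]:
    """Order reports by the profiled execution count of the code they point to.

    Performance reports: hottest code first. Reliability and security
    reports: coldest (least exercised) code first.
    """
    def hotness(report: Report) -> int:
        # Procedures missing from the profile are treated as never executed.
        return hit_counts.get(report.procedure, 0)

    return sorted(reports, key=hotness, reverse=(kind == "performance"))
```

The same sort key serves both policies; only the direction of the sort changes.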

But the ultimate way to deal with false positives is to turn the theory on its head and make it aim to underapproximate the set of executions. In practice, we now have a coverage problem and, once again, data collected at runtime comes to the rescue. The key idea here is that if a piece of code is too complicated for static analysis, then we can collect some input–output behaviors of that piece of code at runtime and feed them back into static analysis to improve its coverage. We have done this by combining Infer and FAUSTA.
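The feedback loop can be sketched as follows. This is only an illustration of the idea, not the actual Infer/FAUSTA integration, and all names and shapes in it are hypothetical.

```python
from collections import defaultdict

# Input-output behaviors observed at runtime (e.g., by a fuzzer), keyed by
# procedure name. All names and shapes here are hypothetical.
observed: dict[str, list[tuple[tuple, object]]] = defaultdict(list)

def record_behavior(procedure: str, args: tuple, result: object) -> None:
    """Record one concrete input-output behavior of a procedure."""
    observed[procedure].append((args, result))

def summary_for(procedure: str, args: tuple) -> list[object] | None:
    """Stand in for a procedure that is too complicated to analyze statically.

    Only results actually witnessed at runtime are returned, so any report
    built on top of them describes a feasible execution: the summary
    underapproximates the procedure rather than overapproximating it.
    None signals a coverage gap, not a license to guess.
    """
    results = [result for (a, result) in observed[procedure] if a == args]
    return results or None
```

Because every behavior in such a summary was witnessed, an analysis that relies only on these summaries cannot manufacture an infeasible execution; the price, as noted above, is coverage.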