Static Analysis vs. Dynamic Analysis

Static analysis and dynamic analysis complement each other well. The strengths and weaknesses of the two approaches are compared point by point below.

1. Static analysis: The source or binary code of the module is analyzed without being executed.

   Dynamic analysis: The data are collected and (if necessary) analyzed while the code of the module is executed. The analysis can be performed during execution (runtime analysis) or after it, using the data collected at runtime (post factum or post-mortem analysis).

2. Static analysis: (follows from the above) No hardware or software specific to the kernel module under analysis is necessary.

   Dynamic analysis: (follows from the above) If the module under analysis requires specific hardware or software, it has to be provided or emulated. This can be a problem if it is necessary, for example, to analyze a driver for some rare network card and that network card is not available. Nevertheless, this is usually not significant for driver developers (they probably have access to the required hardware anyway) or for certification systems from big vendors like Microsoft, SUSE/Novell, Red Hat, Google, Canonical and the like (they usually require all the necessary hardware and software to be submitted).

3. Static analysis: Non-trivial static analysis usually requires a significant amount of resources (time, computing power); how much depends heavily on the complexity of the analysis performed.

   Dynamic analysis: The overhead and the amount of resources necessary for dynamic analysis vary with the complexity of the analysis and the amount of data being collected. In general, neither static nor dynamic analysis is better at all times with respect to the resources needed, but dynamic analysis often requires fewer of them.

4. Static analysis: The source code of the analyzed module is usually required (static analysis can be performed on binary code too, but that is more difficult and is rarely done for kernel modules).

   Dynamic analysis: The source code of the modules under analysis is usually not required. This also makes it possible to analyze closed-source kernel modules (although there are not many of those on Linux), modules built for exotic configurations, etc.

5. Static analysis tools for the developers of kernel modules (usage model).

- Lightweight static analysis tools (like PREfast on MS Windows or Sparse on Linux) can be used by the developers from the very early stages of work on a kernel module. The tools can be used on the developers' machines and can often be kept enabled most of the time (see the Sparse sketch after this list).

- Heavyweight tools (like Static Driver Verifier on MS Windows or LDV on Linux) that perform more complex static analysis should probably be used once most of the functionality of the module is in place. Such tools can sometimes be used on the developers' machines, but they are more likely to be deployed on specialized servers: from time to time, the developers submit the source code of their modules there and then receive the results.
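
For illustration, here is the kind of defect a lightweight checker like Sparse reports (a hypothetical fragment; it assumes the module is built in a kernel tree, where Sparse is run with "make C=1"):

    #include <linux/uaccess.h>

    static long example_ioctl_handler(unsigned long arg)
    {
            int value;
            int __user *uptr = (int __user *)arg;

            /* BUG: a __user pointer is dereferenced directly; Sparse
             * reports an address space violation here. The data should
             * be fetched with copy_from_user() or get_user() instead. */
            value = *uptr;

            return value;
    }

Checks like this are cheap enough to run on every build, which is why such tools can stay enabled through most of the development cycle.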

Dynamic analysis tools for the developers of kernel modules (usage model).

Dynamic analysis tools can be used by the developers of kernel modules on their development machines as soon as the modules can run and do something useful. If the modules deal with specific hardware and/or software, it can be difficult to deploy the dynamic analysis tools on external servers that would provide the appropriate services.

Tools like Driver Verifier on MS Windows or KEDR on Linux can also be used on the users' machines to help collect data about possible errors in the kernel modules there. Users' systems come in a great many configurations, and it is not always possible to reproduce all of them in the development labs. It can therefore be helpful to collect the required data directly on the users' machines with the dynamic analysis tools and send the data to the development lab, where they can be analyzed.

Another area where dynamic analysis tools can be applied is the development of open-source analogues of proprietary modules (this is how Mmiotrace is used in the development of the Nouveau graphics drivers for NVIDIA video cards).

6. Static analysis tools for the kernel module certification systems (usage model).
Static analysis tools can be used in certification systems but, so far, they have not been mandatory there (this is how Static Driver Verifier is used on MS Windows). This is probably because of possible false positives.

The usage model can be as follows. The analysis system is deployed on a server that provides appropriate services. Using these services, one can submit the source code of the module to be analyzed and then receive the results.

Dynamic analysis tools for the kernel module certification systems (usage model).
Dynamic analysis tools can be used in certification systems as well. Such tools usually produce fewer false positives than static analysis systems, so if they detect errors in the analyzed modules, that can be grounds for considering the certification failed. This is how Driver Verifier is used on MS Windows (Windows Logo Program) and how the Novell "API Swapping" facilities are used on Linux (Novell YES Certified Program).

The usage model can be as follows. The certification system loads the dynamic analysis tools and the module under analysis and then runs a series of tests on that module. The tools monitor (and sometimes alter) the execution of the module and detect errors.

7. Static analysis: Many (or even all) paths of execution in the code can be checked at the same time.

   Dynamic analysis: Only one path of execution can be checked at a time.

8. Static analysis: (follows from the above) Given enough time and computing power, most, if not all, of the errors in the module can be revealed. The important question is how much time and how many resources are necessary in each particular case.

   Dynamic analysis: (follows from the above) Only the errors that occur on the paths actually executed can be detected.

9. Static analysis: More suitable when it is enough to analyze many relatively short paths in the code (the path length can be limited by "state explosion" and similar problems).

   Dynamic analysis: More suitable when it is necessary to analyze the execution paths in the code from the beginning to the end.

10. Static analysis: It is often beneficial to use static analysis tools to detect missing error handling. A typical example is code that does not check whether a function completed successfully: whether a memory allocation function returned NULL, whether a function registering some facility returned a nonzero error code, etc. (see the sketch below).

   Dynamic analysis: Dynamic analysis tools usually have no significant advantage over static analysis in detecting missing error handling.

As far as other "problems in error paths" are concerned (use-after-free, etc.), there is no clear winner: again, it depends heavily on how long the path in the code is that has to be analyzed to detect the problem.
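
A minimal sketch of the missing error handling mentioned above, in a hypothetical character device module (all names are made up; a static analyzer can flag both unchecked calls on every path, without ever loading the module):

    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/slab.h>

    static const struct file_operations example_fops = {
            .owner = THIS_MODULE,
    };

    static char *example_buf;

    static int __init example_init(void)
    {
            example_buf = kmalloc(128, GFP_KERNEL);
            /* BUG: kmalloc() may return NULL, but the result is not
             * checked before use. */
            example_buf[0] = '\0';

            /* BUG: register_chrdev() returns a negative error code on
             * failure, but the return value is ignored here. */
            register_chrdev(0, "example_dev", &example_fops);
            return 0;
    }

    static void __exit example_exit(void)
    {
            kfree(example_buf);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");

The corrected code would check example_buf for NULL, store and check the value returned by register_chrdev(), and undo the earlier steps on failure.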

11. Static analysis: Static analysis tools are also well suited to detect the use of an operation under conditions where it is not allowed. A common example is calling a function that might sleep (or, more generally, might cause rescheduling) in atomic context, that is, in interrupt handlers, in critical sections protected by spinlocks and the like (see the sketch below).

   Dynamic analysis: Dynamic analysis tools provide little to no benefit in detecting operations used under incorrect conditions, because such tools can detect errors of this sort only right before, or even after, the operation is performed.
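
A minimal sketch of this error class, with hypothetical names (the kernel itself can catch it at runtime if CONFIG_DEBUG_ATOMIC_SLEEP is enabled, but only on the paths that are actually executed):

    #include <linux/spinlock.h>
    #include <linux/slab.h>

    static DEFINE_SPINLOCK(example_lock);
    static void *example_obj;

    static void example_update(void)
    {
            spin_lock(&example_lock);       /* atomic context starts here */

            /* BUG: a GFP_KERNEL allocation may sleep, which is forbidden
             * under a spinlock. A static checker can flag this call site
             * by itself; a dynamic tool only complains when this path
             * actually runs. GFP_ATOMIC would be correct here. */
            example_obj = kmalloc(64, GFP_KERNEL);

            spin_unlock(&example_lock);
    }
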
12. Both static and dynamic analysis tools are now used to detect errors like race conditions, incorrect memory accesses and resource leaks.

13. Static analysis: As static analysis tools do not require the analyzed module to operate, static analysis can be the safer choice if it is known that the module under analysis can, for example, severely damage the system when something goes wrong.

   Dynamic analysis: Dynamic analysis is sometimes less safe than static analysis because the errors in the analyzed module may have unpredictable consequences while the module operates.

14. Static analysis: In many cases, static analysis of kernel modules is more likely to produce false positives than dynamic analysis (and sometimes the number of false positives can be unacceptable). There are at least the following reasons for this:

- To perform static analysis of a kernel module, a model of the environment in which the module executes is needed (for example, it may be necessary to model how the underlying hardware works). The model is usually not 100% accurate; it is a model, after all.

- If a particular module is analyzed, models of the modules it interacts with, as well as of the kernel proper, are also needed, and they are usually not accurate either (a sketch of such a model is given below).
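
For illustration, an environment model might describe kmalloc() roughly as follows. The stub and the nondet_* primitives are hypothetical, in the style of the __VERIFIER_nondet_* functions many verification tools understand:

    #include <stddef.h>

    /* Hypothetical nondeterministic primitives provided by the
     * verification tool. */
    extern int nondet_int(void);
    extern void *nondet_ptr(void);

    /* A model of kmalloc() for static analysis: the allocation may
     * nondeterministically fail. If the model is too pessimistic
     * (failing where the real kernel cannot), the analyzer reports
     * false positives; if it is too optimistic, real errors are
     * missed. */
    void *kmalloc_model(size_t size, unsigned int flags)
    {
            if (nondet_int())
                    return NULL;    /* the allocation failed */
            return nondet_ptr();    /* some object of `size` bytes */
    }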

Dynamic analysis: Dynamic analysis tools generally produce fewer false positives than static analysis tools (or even none at all).

Still, dynamic analysis tools can produce false positives too. For example, many data race detectors are known to report false positives when the analyzed system uses non-standard synchronization facilities (see the sketch below).
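
A sketch of such a facility in user-space C (hypothetical code): the release/acquire handshake on the flag orders the accesses to shared_data, so there is no real race, yet a detector that only tracks standard locking primitives may still report one:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int shared_data;         /* plain, non-atomic variable */
    static atomic_int ready;        /* hand-rolled synchronization flag */

    static void *producer(void *arg)
    {
            shared_data = 42;       /* happens before the release store */
            atomic_store_explicit(&ready, 1, memory_order_release);
            return NULL;
    }

    static void *consumer(void *arg)
    {
            while (!atomic_load_explicit(&ready, memory_order_acquire))
                    ;               /* spin until the producer is done */
            printf("%d\n", shared_data);    /* ordered by the acquire load */
            return NULL;
    }

    int main(void)
    {
            pthread_t p, c;

            pthread_create(&p, NULL, producer, NULL);
            pthread_create(&c, NULL, consumer, NULL);
            pthread_join(p, NULL);
            pthread_join(c, NULL);
            return 0;
    }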