Approximate computing: Design & test for integrated circuits
Abstract
Today's Integrated Circuits (ICs) are starting to reach the physical limits of CMOS technology. Among the many challenges arising from technology nodes below 20 nm, we can mention high leakage current (i.e., high static power consumption), reduced performance gains, reduced reliability, complex manufacturing processes leading to low yield and complex testing procedures, and extremely costly masks. In other words, ICs manufactured with the latest technology nodes increasingly fall short of the efficiency (with respect to both performance and energy consumption) forecast by Moore's law. Moreover, manufactured devices are becoming less reliable, meaning that errors can appear during the normal lifetime of a device with a higher probability than with previous technology nodes. Fault-tolerance mechanisms are therefore required to ensure the correct behavior of such devices, at the cost of extra area, power, and timing overheads. Finally, process variations force engineers to add extra guard bands (e.g., a higher supply voltage or a lower clock frequency than required under nominal conditions) to guarantee the correct functioning of manufactured devices.

In recent years, the Approximate Computing (AC) paradigm has emerged. AC is based on the intuitive observation that, while performing exact computation requires a large amount of resources, allowing selective approximation or occasional violation of the specification can provide gains in efficiency (i.e., lower power consumption, smaller area, higher manufacturing yield) without significantly affecting the output quality [1]. This invited talk will first discuss the basic concepts of the AC paradigm and then focus on the design and test of integrated circuits for AC-based systems.

From the design point of view, a methodology able to automatically explore the impact of different approximation techniques on a given input algorithm will be presented. The methodology does not require the user to specify which part(s) of the algorithm should be approximated and how; it only requires the user to define the acceptable output degradation. Aiming at a general method able to consider arbitrary approximate computing techniques, IDEA has been developed: a design exploration tool that finds approximate versions of an algorithm described in software (i.e., coded in C/C++) by mutating the original code [2]. The mutated (i.e., approximated) version of the algorithm is then synthesized with a High-Level Synthesis tool to obtain HDL code, which is finally mapped onto an FPGA to estimate the benefits of the approximation in terms of area reduction and performance. IDEA is composed of two main components: (i) a source-to-source compiler, Clang-Chimera, which analyzes the Abstract Syntax Tree (AST) to apply user-defined code mutators, and (ii) a Branch & Bound optimizer that searches for the best functional approximation of the given C/C++ code.
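As a purely illustrative sketch (not taken from [2]), the following C++ fragment shows the kind of transformation a precision-scaling code mutator could apply to a dot-product kernel: the exact floating-point accumulation is replaced by a truncated fixed-point one, trading output quality for a cheaper synthesized datapath. The function names and the chosen mutation are hypothetical.

```cpp
#include <cstdint>

// Exact reference version of the kernel fed to the exploration flow.
float dot_exact(const float *a, const float *b, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

// Hypothetical mutated version: a precision-scaling mutator rewrites the
// arithmetic on 16-bit fixed-point values (Q8.8), so the HLS tool can map it
// onto a much smaller datapath, at the cost of quantization error.
float dot_approx(const float *a, const float *b, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        int16_t qa = static_cast<int16_t>(a[i] * 256.0f);   // quantize to Q8.8
        int16_t qb = static_cast<int16_t>(b[i] * 256.0f);
        acc += (static_cast<int32_t>(qa) * qb) >> 8;        // Q8.8 multiply
    }
    return static_cast<float>(acc) / 256.0f;                // back to floating point
}
```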
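The Branch & Bound exploration can be thought of as a search over which candidate mutation sites are enabled. The following C++ sketch uses hypothetical names and a deliberately simplified cost/error model (it is not the optimizer of [2]): it prunes any partial assignment whose accumulated error already exceeds the user-defined degradation threshold and keeps the cheapest complete assignment found so far.

```cpp
#include <vector>

// One candidate mutation site: enabling it saves area but adds output error.
// In a real flow both figures would come from profiling/HLS runs (hypothetical model).
struct Site { double area_saving; double error_contribution; };

struct Explorer {
    std::vector<Site> sites;
    double error_budget;                       // user-defined acceptable degradation
    double best_saving = 0.0;
    std::vector<bool> best_choice;

    void search(std::vector<bool>& choice, double saving, double error) {
        if (error > error_budget) return;      // bound: quality constraint violated
        std::size_t i = choice.size();
        if (i == sites.size()) {               // complete assignment reached
            if (saving > best_saving) { best_saving = saving; best_choice = choice; }
            return;
        }
        // Bound: even enabling every remaining site cannot beat the best found so far.
        double optimistic = saving;
        for (std::size_t j = i; j < sites.size(); ++j) optimistic += sites[j].area_saving;
        if (optimistic <= best_saving) return;

        choice.push_back(true);                // branch 1: apply mutation at site i
        search(choice, saving + sites[i].area_saving, error + sites[i].error_contribution);
        choice.back() = false;                 // branch 2: keep exact code at site i
        search(choice, saving, error);
        choice.pop_back();
    }
};
```

In the actual flow, the per-site area figures would be obtained by synthesizing the mutated variants, while the error would be measured against the output of the original algorithm.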
Regarding the test of integrated circuits, we start from the observation that AC-based systems can intrinsically accept the presence of faulty hardware (i.e., hardware that can produce errors). This paradigm is also called "computing on unreliable hardware". Hardware-induced errors have to be analyzed to determine how they propagate through the system layers and, ultimately, their impact on the final application. In other words, an AC-based system does not need to be built using defect-free ICs. Indeed, AC-based systems can manage at a higher level the errors caused by defective ICs, or those errors simply do not significantly impact the final application. Under this assumption, we can relax the test and reliability constraints of the manufactured ICs [3]. One way to achieve this goal is to test only a subset of faults instead of targeting all possible faults. In this way, we can reduce the manufacturing cost, since we reduce the number of test patterns and thus the test time.
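As a purely illustrative sketch of this idea (not a method from [3]), the following C++ fragment keeps, from a complete fault list, only the faults whose estimated impact on the application output exceeds the acceptable degradation; only these significant faults would then require test patterns. The names and the impact metric are hypothetical.

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// A fault annotated with the output-quality degradation it causes when present,
// e.g. obtained by fault injection at application level (hypothetical metric).
struct Fault { int id; double output_degradation; };

// Keep only the faults whose effect is not tolerated by the AC-based application:
// the tolerated faults are left untested, shrinking the pattern set and the test time.
std::vector<Fault> select_faults_to_test(const std::vector<Fault>& all_faults,
                                         double acceptable_degradation) {
    std::vector<Fault> to_test;
    std::copy_if(all_faults.begin(), all_faults.end(), std::back_inserter(to_test),
                 [&](const Fault& f) {
                     return f.output_degradation > acceptable_degradation;
                 });
    return to_test;
}
```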