% vim:set spell:
% vim:spell spelllang=en:

Our project covers several critical domains of system design in order
to achieve high-performance computing. Starting from a high-level description, we aim
to generate automatically both the hardware and software components of the system.

\subsubsection{High Performance Computing}
% A market swallowed by GPGPU architectures such as the Nvidia FERMI, and by the CUDA programming language
The High-Performance Computing (HPC) world is composed of three main families of architectures:
many-core, GPGPU (General-Purpose computation on Graphics Processing Units) and FPGA.
The first two families dominate the market, benefiting
from the strength and influence of mass-market leaders (Intel, Nvidia).
%such as Intel for many-core CPU and Nvidia for GPGPU.
In this market, FPGA architectures are emerging and very promising.
By adapting the architecture to the software, % (the opposite is done in the other families)
FPGA architectures enable better performance
(typically between x10 and x100 acceleration)
while using a smaller area and less energy (and producing less heat).
However, using FPGAs presents significant challenges~\cite{hpc06a}.
First, the operating frequency of an FPGA is low compared to that of a high-end microprocessor.
Second, based on Amdahl's law, HPC/FPGA application performance is unusually sensitive
to the implementation quality~\cite{hpc06b}.
% Thus, the performance strongly relies on the detected parallelism.
% (to summarize the last two points)
Finally, efficient design methodologies are required in order to
hide the FPGA complexity and the underlying implementation subtleties from HPC users,
so that they do not have to change their habits and can reach a design productivity
equivalent to that of the other families~\cite{hpc07a}.
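The sensitivity to implementation quality mentioned above follows directly from Amdahl's law: if a fraction $f$ of the application runtime is accelerated by a factor $s$ on the FPGA, the overall speedup is bounded by the part left in software.

```latex
S(f,s) \;=\; \frac{1}{(1-f) + \frac{f}{s}}
% For example, with s = 100 (a typical FPGA acceleration) and f = 0.9,
% S is only about 9.2: the 10% of the code left unaccelerated dominates
% the runtime, which is why implementation quality is critical.
```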

%state of the art: FPGA
HPC/FPGA hardware is only now emerging and in early commercial stages,
but design techniques and tools have not yet caught up.
Industrial (Mitrionics~\cite{hpc08}, Gidel~\cite{hpc09}, Convey Computer~\cite{hpc10}) and academic (CHREC)
research on HPC/FPGA is mainly conducted in the USA.
None of the approaches developed in this research entirely fulfills the
challenges described above. For example, Convey Computer proposes application-specific instruction set extensions of x86 cores in an FPGA accelerator,
but the extension generation is not automated and requires hardware design skills.
Mitrionics has an elegant solution based on a compute engine specifically
developed for high-performance execution in FPGAs. Unfortunately, the design flow
is based on a new programming language (mitrionC), implying designer effort and poor portability.
% tool relying on operator libraries (XtremeData),
% Should we mention the OpenFPGA consortium, whose goal is "to accelerate the incorporation of reconfigurable computing technology in high-performance and enterprise applications"?

Thus, much effort is required to develop design tools that translate high-level
language programs into FPGA configurations.
Moreover, as already remarked in~\cite{hpc11}, Dynamic Partial Reconfiguration~\cite{hpc12}
(DPR, which enables changing a part of the FPGA while the rest is still working)
appears very interesting for improving HPC performance as well as reducing the required area.

\subsubsection{System Synthesis}
Today, several solutions for system design are proposed and commercialized.
The most common are those provided by Altera and Xilinx to promote their
FPGA devices.
\\
The Xilinx System Generator for DSP~\cite{system-generateur-for-dsp} is a
plug-in to Simulink that enables designers to develop high-performance DSP
systems for Xilinx FPGAs.
Designers can design and simulate a system using MATLAB and Simulink. The
tool then automatically generates synthesizable Hardware Description
Language (HDL) code mapped to Xilinx pre-optimized algorithms.
However, this tool targets only DSP-based algorithms and Xilinx FPGAs, and
cannot handle a complete SoC. Thus, it is not really a system synthesis tool.
\\
In contrast, SOPC Builder~\cite{spoc-builder} allows designers to describe a
system, synthesize it, program it into a target FPGA and upload a
software application.
% FIXME(C2H from Altera: fast, but uses enormous resources)
Nevertheless, SOPC Builder does not provide any facility to synthesize
coprocessors. The system designer must provide the synthesizable description
with the feasible bus interface.
\\
In addition, Xilinx System Generator and SOPC Builder are closed worlds,
since each imposes its own IPs, which are not interchangeable.
We can conclude that the existing commercial or free tools do not
cover the whole system synthesis process in a fully automatic way. Moreover,
they are bound to a particular device family and IP library.

\subsubsection{High Level Synthesis}
High-Level Synthesis (HLS) translates a sequential algorithmic description and a
set of constraints (area, power, frequency, ...) into a micro-architecture at
the Register Transfer Level (RTL).
Several academic and commercial tools are available today. The most common
tools are SPARK~\cite{spark04}, GAUT~\cite{gaut08} and UGH~\cite{ugh08} in the
academic world, and CATAPULTC~\cite{catapult-c}, PICO~\cite{pico} and
CYNTHETIZER~\cite{cynthetizer} in the commercial world. Despite their
maturity, their usage is restrained by the following limitations:
\begin{itemize}
\item They do not accurately respect the frequency constraint when they target an FPGA device.
Their error is about 10 percent. This is annoying when the generated component is integrated
in a SoC, since it will slow down the whole system.
\item These tools take into account only one or a few constraints simultaneously, while realistic
designs are multi-constrained.
Moreover, a low power consumption constraint is mandatory for embedded systems.
However, it is not yet well handled by common synthesis tools.
\item The parallelism is extracted from the initial algorithm. To get more parallelism or to reduce
the amount of required memory, the user must rewrite the algorithm, even though techniques such as polyhedral
transformations exist to increase the intrinsic parallelism.
\item Although they share the same input language (C/C++), they are sensitive to the style in
which the algorithm is written. Consequently, engineering work is required to move from
one tool to another.
\item The HLS tools are not integrated into an architecture and system exploration tool.
Thus, a designer who needs to accelerate a software part of the system must adapt it manually
to the HLS input dialect and perform engineering work to exploit the synthesis result
at the system level.
\end{itemize}
Given these limitations, it is necessary to create a new generation of tools that reduce the gap
between the specification of a heterogeneous system and its hardware implementation.

\subsubsection{Application Specific Instruction Processors}

ASIPs (Application-Specific Instruction-Set Processors) are programmable
processors in which both the instruction set and the micro-architecture have
been tailored to a given application domain (e.g., video processing) or to a
specific application. This specialization usually offers a good compromise
between performance (w.r.t.\ a pure software implementation on an embedded
CPU) and flexibility (w.r.t.\ an application-specific hardware co-processor).
In spite of their obvious advantages, using and designing ASIPs remains a
difficult task, since it involves designing both a micro-architecture and a
compiler for this architecture. Besides, to our knowledge, there is still
no available open-source design flow\footnote{There are commercial tools
such a } for ASIP design, even if such a tool would be valuable in the
context of a system-level design exploration tool.
\par
In this context, ASIP design based on Instruction Set Extensions (ISEs) has
received a lot of interest~\cite{NIOS2,ST70}, as it makes micro-architecture synthesis
more tractable\footnote{ISEs rely on a template micro-architecture in which
only a small fraction of the architecture has to be specialized}, and helps ASIP
designers focus on compilers, for which there are still many open
problems~\cite{CODES04,FPGA08}.
This approach however has a strong weakness, since it also significantly reduces
the opportunities for achieving good speedups (most speedups remain between 1.5x and
2.5x): ISE performance is generally tied down by I/O constraints, since
ISEs generally rely on the main CPU register file to access data.
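As an illustration (the kernel and the fused `mac` operation are our assumptions, not taken from the cited works), a multiply-accumulate loop is a typical ISE candidate; yet even when multiply and add are fused into one custom instruction, each invocation still moves all its operands through the core register file, so the register-file ports, not the ISE datapath, bound the achievable speedup:

```c
#include <stdint.h>

/* Hypothetical fused multiply-accumulate ISE: one custom instruction
 * replacing a separate multiply and add. */
static inline int32_t mac(int32_t acc, int32_t a, int32_t b)
{
    return acc + a * b;
}

/* Typical ISE candidate kernel. Each mac() needs three register reads
 * (acc, a[i], b[i]) and one write, already exceeding a conventional
 * 2-read/1-write register file: this is exactly the I/O constraint
 * that caps most ISE speedups between 1.5x and 2.5x. */
int32_t dot(const int32_t a[], const int32_t b[], int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc = mac(acc, a[i], b[i]);
    return acc;
}
```

The tighter micro-architectural couplings advocated in~\cite{DAC09,DAC08} below aim precisely at letting such a kernel stream its data without going through the register file on every operation.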

% (
%automatically extracting ISE candidates from application code \cite{CODES04},
%performing efficient instruction selection and/or storage resource (register)
%allocation \cite{FPGA08}).
To cope with this issue, recent approaches~\cite{DAC09,DAC08} advocate the use of
micro-architectural ISE models in which the coupling between the processor micro-architecture
and the ISE component is tightened up so as to allow the ISE to overcome the register
I/O limitations. However, these approaches tackle the problem from a compiler/simulation
point of view and do not address the problem of generating synthesizable representations for
these models.

We therefore strongly believe that there is a need for an open framework which
would allow researchers and system designers to:
\begin{itemize}
\item Explore the various levels of interaction between the original CPU micro-architecture
and its extension (for example through a Domain-Specific Language targeted at micro-architecture
specification and synthesis).
\item Retarget the compiler instruction-selection passes (or prototype new passes) so as
to be able to take advantage of these ISEs.
\item Provide complete system-level integration for using ASIPs as SoC building blocks
(integration with application-specific blocks, MPSoC, etc.).
\end{itemize}

\subsubsection{Automatic Parallelization}
% FIXME:LIP FIXME:PF FIXME:CA
% Paul, I am not sure this is really a state of the art
% Christophe, what you sent me is in obsolete/body.tex
%\mustbecompleted{
%Hardware is inherently parallel. On the other hand, high level languages,
%like C or Fortran, are abstractions of the processors of the 1970s, and
%hence are sequential. One of the aims of an HLS tool is therefore to
%extract hidden parallelism from the source program, and to infer enough
%hardware operators for its efficient exploitation.
%\\
%Present day HLS tools search for parallelism in linear pieces of code
%acting only on scalars -- the so-called basic blocks. On the other hand,
%it is well known that most programs, especially in the fields of signal
%processing and image processing, spend most of their time executing loops
%acting on arrays. Efficient use of the large amount of hardware available
%in the next generation of FPGA chips necessitates parallelism far beyond
%what can be extracted from basic blocks only.

%The Compsys team of LIP has built an automatic parallelizer, Syntol, which
%handles restricted C programs -- the well known polyhedral model --,
%computes dependences and builds a symbolic schedule. The schedule is
%a specification for a parallel program. The parallelism itself can be
%expressed in several ways: as a system of threads, or as data-parallel
%operations, or as a pipeline. In the context of the COACH project, one
%of the tasks will be to decide which form of parallelism is best suited
%to hardware, and how to convey the results of Syntol to the actual
%synthesis tools. One of the advantages of this approach is that the
%resulting degree of parallelism can be easily controlled, e.g. by
%adjusting the number of threads, as a means of exploring the
%area / performance tradeoff of the resulting design.

%Another point is that potentially parallel programs necessarily involve
%arrays: two operations which write to the same location must be executed
%in sequence. In synthesis, arrays translate to memory. However, in FPGAs,
%the amount of on-chip memory is limited, and access to an external memory
%has a high time penalty. Hence the importance of reducing the size of
%temporary arrays to the minimum necessary to support the requested degree
%of parallelism. Compsys has developed a stand-alone tool, Bee, based
%on research by A. Darte, F. Baray and C. Alias, which can be extended
%into a memory optimizer for COACH.
%}

The problem of compiling sequential programs for parallel computers
has been studied since the advent of the first parallel architectures
in the 1970s. The basic approach consists in applying program transformations
which exhibit or increase the potential parallelism, while guaranteeing
the preservation of the program semantics. Most of these transformations
just reorder the operations of the program; some of them modify its
data structures. Dependences (exact or conservative) are checked to guarantee
the legality of the transformation.
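As a minimal sketch (the arrays and statements are ours, not from any cited work), loop fusion is legal below because the only dependence between the two loops, from the write of `b[i]` to its read, stays within a single iteration of the fused loop, so the reordering preserves the program semantics:

```c
#define N 8

/* Original program: two separate loops over the same index range. */
void two_loops(const int a[N], int b[N], int c[N])
{
    for (int i = 0; i < N; i++)
        b[i] = a[i] + 1;        /* writes b[i] */
    for (int i = 0; i < N; i++)
        c[i] = 2 * b[i];        /* reads the b[i] written above */
}

/* After loop fusion: the dependence b[i] -> c[i] is still respected,
 * because the write of b[i] precedes its read inside each iteration. */
void fused(const int a[N], int b[N], int c[N])
{
    for (int i = 0; i < N; i++) {
        b[i] = a[i] + 1;
        c[i] = 2 * b[i];
    }
}
```

Had the second loop read `b[i+1]` instead, the dependence would cross iterations and this fusion would be illegal — which is exactly what the dependence check is there to detect.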

This has led to the invention of many loop transformations (loop fusion,
loop splitting, loop skewing, loop interchange, loop unrolling, ...)
which interact in a complicated way. More recently, it has been noticed
that all of these are just changes of basis in the iteration domain of
the program. This observation has led to the invention of the polyhedral model, in
which the combination of two transformations is simply a matrix product.
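For instance (a standard textbook illustration, not tied to any cited tool), writing loop interchange and loop skewing as unimodular matrices acting on the iteration vector $(i,j)$ turns their composition into a plain matrix product:

```latex
\underbrace{\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}}_{\text{interchange}}
\underbrace{\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}}_{\text{skewing}}
=
\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix},
\qquad
\begin{pmatrix} i \\ j \end{pmatrix} \mapsto
\begin{pmatrix} i + j \\ i \end{pmatrix}.
```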

As a side effect, it has been observed that the polytope model is a useful
tool for many other optimizations, like memory reduction and locality
improvement. Another point is
that the polyhedral model \emph{stricto sensu} applies only to
very regular programs. Its extension to more general programs is
an active research subject.

\subsubsection{Interfaces}
\newcommand{\ip}{\sc ip}
\newcommand{\dma}{\sc dma}
\newcommand{\soc}{\sc SoC}
\newcommand{\mwmr}{\sc mwmr}
The hardware/software interface has been a difficult task since the advent
of complex systems on chip. After the first co-design
environments~\cite{Coware,Polis,Ptolemy}, the Hardware Abstraction Layer
was defined so that software applications can be developed without low-level
hardware implementation details. In~\cite{jerraya}, Yoo and Jerraya
propose an {\sc api} with extension ability instead of a unique hardware
abstraction layer. System-level communication frameworks have also been
introduced~\cite{JerrayaPetrot,mwmr}.
\par
A good abstraction of a hardware/software interface has been proposed
in~\cite{Jantsch}: it is composed of a software driver, a {\dma} and a
bus interface circuit. Automatic wrapping between bus protocols has
generated a lot of papers~\cite{Avnit,smith,Narayan,Alberto}. These works
do not use a {\dma}. In COACH, the hardware/software interface is done at a
higher level and uses burst communication in the bus interface circuit to
improve the communication performance.
\par
There are two important projects related to the efficient interfacing of
data-flow {\ip}s: the work of Park and Diniz~\cite{Park01} and the
Lip6 work on {\mwmr}~\cite{mwmr}. Park and Diniz~\cite{Park01} proposed
a generic interface that can be parameterized to connect different
data-flow {\ip}s. This approach does not require the communications to be
statically known and proposes a runtime resolution to solve conflicting
accesses to the bus. To our knowledge this approach has not been implemented
further since 2003.
\par
{\mwmr}~\cite{mwmr} stands for both a computation model (multi-write,
multi-read {\sc fifo}) inherited from Kahn Process Networks and a bus
interface circuit protocol. As for the work of Park and Diniz, {\mwmr}
does not make the assumption of a static communication flow. This keeps the
software driver simple to write, but introduces additional complexity due
to the mutual exclusion locks necessary to protect the shared memory.
\par
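As a hedged sketch (the names, depth and non-blocking API are ours, not the actual MWMR implementation), the software side of such a multi-write multi-read channel boils down to a ring buffer whose shared state must be guarded by a lock on every access — which is precisely the extra complexity mentioned above:

```c
#include <pthread.h>
#include <stdbool.h>

#define DEPTH 8

/* Hypothetical software view of a multi-write multi-read channel:
 * any writer may push and any reader may pop, so the shared state
 * (buffer, indices, count) needs a mutual exclusion lock. */
typedef struct {
    int buf[DEPTH];
    int head, tail, count;
    pthread_mutex_t lock;
} mwmr_t;

void mwmr_init(mwmr_t *ch)
{
    ch->head = ch->tail = ch->count = 0;
    pthread_mutex_init(&ch->lock, NULL);
}

bool mwmr_write(mwmr_t *ch, int v)      /* non-blocking push */
{
    pthread_mutex_lock(&ch->lock);
    bool ok = ch->count < DEPTH;
    if (ok) {
        ch->buf[ch->tail] = v;
        ch->tail = (ch->tail + 1) % DEPTH;
        ch->count++;
    }
    pthread_mutex_unlock(&ch->lock);
    return ok;
}

bool mwmr_read(mwmr_t *ch, int *v)      /* non-blocking pop */
{
    pthread_mutex_lock(&ch->lock);
    bool ok = ch->count > 0;
    if (ok) {
        *v = ch->buf[ch->head];
        ch->head = (ch->head + 1) % DEPTH;
        ch->count--;
    }
    pthread_mutex_unlock(&ch->lock);
    return ok;
}
```

The driver stays a simple push/pop, but every transfer pays for the lock; this is the trade-off that motivates the statically described streams used below.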
We propose, in COACH, to use recent work on the hardware/software
interface~\cite{FR-vlsi} that uses a {\em clever} {\dma} responsible for
managing data streams. An assumption is that the behavior of the {\ip}s can
be statically described. A similar choice has been made in the Faust
{\soc}~\cite{FAUST}, which includes the {\em smart memory engine} component.
Jantsch and O'Nils already noticed in~\cite{Jantsch} the huge complexity
of writing this hardware/software interface. In COACH, automatic
generation of the interface will be achieved; this is one goal of the CITI
contribution to COACH.

---|