Our project covers several critical domains of system design in order to achieve high-performance computing. Starting from a high-level description, we aim at automatically generating both the hardware and the software components of the system.
\subsubsection{High Performance Computing}
Accelerating high-performance computing (HPC) applications with field-programmable gate arrays (FPGAs) can potentially improve performance. However, using FPGAs presents significant challenges~\cite{hpc06a}. First, the operating frequency of an FPGA is low compared to that of a high-end microprocessor. Second, based on Amdahl's law, HPC/FPGA application performance is unusually sensitive to the implementation quality~\cite{hpc06b}. Finally, high-performance computing programmers are a highly sophisticated but scarce resource. Such programmers are expected to readily use new technology but lack the time to learn a completely new skill such as logic design~\cite{hpc07a}.
\\
HPC/FPGA hardware is only now emerging and is still in the early commercial stages, and design techniques and tools have not yet caught up with it. Thus, much effort is required to develop design tools that translate high-level language programs into FPGA configurations.
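To give a rough idea of why implementation quality matters so much, recall Amdahl's law (a standard textbook argument, recalled here purely as an illustration): if a fraction $p$ of the application run time is accelerated by a factor $s$ on the FPGA, the overall speedup is
\[
  S = \frac{1}{(1 - p) + p/s}.
\]
For instance, with $p = 0.9$ and $s = 10$, the overall speedup is only about $5.3$; if a lower-quality implementation or partitioning reduces the accelerated fraction to $p = 0.8$, the speedup drops to about $3.6$. Even modest inefficiencies in the generated accelerator or in the hardware/software partitioning therefore have a large impact on the final performance.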
\subsubsection{System Synthesis}
Several solutions for system design are proposed and commercialized today. The most common are those provided by Altera and Xilinx to promote their FPGA devices.
\\
The Xilinx System Generator for DSP~\cite{system-generateur-for-dsp} is a plug-in to Simulink that enables designers to develop high-performance DSP systems for Xilinx FPGAs. Designers can design and simulate a system using MATLAB and Simulink. The tool then automatically generates synthesizable Hardware Description Language (HDL) code mapped to Xilinx pre-optimized algorithms. However, this tool targets only DSP-based algorithms and Xilinx FPGAs, and it cannot handle a complete SoC. Thus, it is not really a system synthesis tool.
\\
In contrast, SOPC Builder~\cite{spoc-builder} allows the designer to describe a system, to synthesize it, to program it into a target FPGA and to upload a software application.
% FIXME (C2H from Altera: runs fast but uses an enormous amount of resources)
Nevertheless, SOPC Builder does not provide any facility to synthesize coprocessors: the system designer must provide the synthesizable description along with a suitable bus interface.
\\
In addition, Xilinx System Generator and SOPC Builder are closed worlds, since each imposes its own IPs, which are not interchangeable. We can conclude that the existing commercial or free tools do not cover the whole system synthesis process in a fully automatic way. Moreover, they are bound to a particular device family and IP library.
\subsubsection{High Level Synthesis}
High-Level Synthesis (HLS) translates a sequential algorithmic description and a set of constraints (area, power, frequency, ...) into a micro-architecture at the Register Transfer Level (RTL). Several academic and commercial tools are available today. The most common tools are SPARK~\cite{spark04}, GAUT~\cite{gaut08} and UGH~\cite{ugh08} in the academic world, and CATAPULTC~\cite{catapult-c}, PICO~\cite{pico} and CYNTHETIZER~\cite{cynthetizer} in the commercial world. Despite their maturity, their usage is restrained by the following limitations:
\begin{itemize}
\item They do not accurately respect the frequency constraint when they target an FPGA device; their error is about 10 percent. This is annoying when the generated component is integrated into a SoC, since it will slow down the whole system.
\item These tools take into account only one or a few constraints simultaneously, while realistic designs are multi-constrained. Moreover, a low power consumption constraint is mandatory for embedded systems, yet it is not well handled by common synthesis tools.
\item The parallelism is extracted from the initial algorithm. To obtain more parallelism or to reduce the amount of required memory, the user must rewrite the algorithm by hand, even though techniques such as polyhedral transformations exist to increase the intrinsic parallelism.
\item Although they share the same input language (C/C++), they are sensitive to the style in which the algorithm is written. Consequently, engineering work is required to move from one tool to another.
\item HLS tools are not integrated into an architecture and system exploration tool. Thus, a designer who needs to accelerate a software part of the system must adapt it manually to the HLS input dialect and perform engineering work to exploit the synthesis result at the system level.
\end{itemize}
Given these limitations, it is necessary to create a new generation of tools that reduce the gap between the specification of a heterogeneous system and its hardware implementation.
\subsubsection{Application Specific Instruction Processors}
ASIPs (Application-Specific Instruction-Set Processors) are programmable processors in which both the instruction set and the micro-architecture have been tailored to a given application domain (e.g. video processing) or to a specific application. This specialization usually offers a good compromise between performance (w.r.t.\ a pure software implementation on an embedded CPU) and flexibility (w.r.t.\ an application-specific hardware co-processor). In spite of their obvious advantages, using and designing ASIPs remains a difficult task, since it involves designing both a micro-architecture and a compiler for this architecture. Besides, to our knowledge, there is still no open-source design flow available\footnote{There are commercial tools such a } for ASIP design, even though such a tool would be valuable in the context of a system-level design exploration tool.
\par
In this context, ASIP design based on Instruction Set Extensions (ISEs) has received a lot of interest~\cite{NIOS2,ST70}, as it makes micro-architecture synthesis more tractable\footnote{ISEs rely on a template micro-architecture in which only a small fraction of the architecture has to be specialized} and helps ASIP designers focus on the compiler, for which there are still many open problems~\cite{CODES04,FPGA08}. This approach however has a strong weakness: it significantly reduces the opportunities for achieving good speedups (most speedups remain between 1.5x and 2.5x), since ISE performance is generally tied down by I/O constraints, as ISEs usually rely on the main CPU register file to access data. To cope with this issue, recent approaches~\cite{DAC09,DAC08} advocate the use of micro-architectural ISE models in which the coupling between the processor micro-architecture and the ISE component is tightened so as to allow the ISE to overcome the register I/O limitations. However, these approaches tackle the problem from a compiler/simulation point of view and do not address the generation of synthesizable representations for these models.
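To make this register-file I/O bottleneck concrete, consider the following purely illustrative C kernel (a hypothetical example, not taken from the works cited above), of the kind that an ISE identification tool might select as a custom instruction:
\begin{verbatim}
/* Hypothetical ISE candidate: sum of absolute differences over four
 * pixel pairs.  Collapsed into a single custom instruction, the pattern
 * needs eight byte operands, but a conventional RISC data path reads at
 * most two registers per instruction.  The pattern must therefore be
 * split into several smaller instructions or fed through extra register
 * moves, which caps the achievable speedup. */
static inline int sad4(const unsigned char a[4], const unsigned char b[4])
{
    int acc = 0;
    for (int i = 0; i < 4; i++) {
        int d = a[i] - b[i];
        acc += (d < 0) ? -d : d;
    }
    return acc;
}
\end{verbatim}
Tighter coupling between the ISE and the processor micro-architecture (for instance, direct access to a local memory or to a wider internal storage than the architectural register file) removes this bottleneck, which is what the approaches cited above aim at modeling.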
We therefore strongly believe that there is a need for an open framework that would allow researchers and system designers to:
\begin{itemize}
\item Explore the various levels of interaction between the original CPU micro-architecture and its extension (for example through a Domain Specific Language targeted at micro-architecture specification and synthesis).
\item Retarget the compiler instruction-selection passes (or prototype new passes) so as to take advantage of these ISEs.
\item Provide complete system-level integration for using ASIPs as SoC building blocks (integration with application-specific blocks, MPSoC, etc.).
\end{itemize}
\subsubsection{Automatic Parallelization}
\mustbecompleted{
Hardware is inherently parallel. On the other hand, high-level languages, like C or Fortran, are abstractions of the processors of the 1970s and hence are sequential. One of the aims of an HLS tool is therefore to extract the hidden parallelism from the source program and to infer enough hardware operators for its efficient exploitation.
\\
Present-day HLS tools search for parallelism in linear pieces of code acting only on scalars -- the so-called basic blocks. On the other hand, it is well known that most programs, especially in the fields of signal and image processing, spend most of their time executing loops acting on arrays. Efficient use of the large amount of hardware available in the next generation of FPGA chips necessitates parallelism far beyond what can be extracted from basic blocks only.
\\
The Compsys team of LIP has built an automatic parallelizer, Syntol, which handles restricted C programs -- the well-known polyhedral model --, computes dependences and builds a symbolic schedule. The schedule is a specification for a parallel program. The parallelism itself can be expressed in several ways: as a system of threads, as data-parallel operations, or as a pipeline. In the context of the COACH project, one of the tasks will be to decide which form of parallelism is best suited to hardware, and how to convey the results of Syntol to the actual synthesis tools. One of the advantages of this approach is that the resulting degree of parallelism can be easily controlled, e.g.\ by adjusting the number of threads, as a means of exploring the area/performance trade-off of the resulting design.
\\
Another point is that potentially parallel programs necessarily involve arrays: two operations that write to the same location must be executed in sequence. In synthesis, arrays translate to memory. However, in FPGAs, the amount of on-chip memory is limited, and access to an external memory has a high time penalty. Hence the importance of reducing the size of temporary arrays to the minimum necessary to support the requested degree of parallelism. Compsys has developed a stand-alone tool, Bee, based on research by A. Darte, F. Baray and C. Alias, which can be extended into a memory optimizer for COACH.
}
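As a minimal illustration of the kind of code targeted by the polyhedral model (a generic example, not taken from Syntol or Bee), consider the following loop nest:
\begin{verbatim}
/* Affine loop bounds and affine array accesses make this nest a valid
 * polyhedral-model candidate.  The only flow dependence is the reduction
 * on s, carried by the j loop; the i iterations are independent, so a
 * polyhedral scheduler can expose them as parallel threads or as a
 * pipelined hardware datapath. */
void mat_vec(int n, const int A[n][n], const int x[n], int y[n])
{
    for (int i = 0; i < n; i++) {      /* parallelizable dimension */
        int s = 0;
        for (int j = 0; j < n; j++)    /* sequential reduction     */
            s += A[i][j] * x[j];
        y[i] = s;
    }
}
\end{verbatim}
On such programs, the dependence analysis and the symbolic schedule computed by a tool such as Syntol indicate which loop dimensions can run in parallel and, consequently, how many hardware operators and memory ports this parallelism requires.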
\subsubsection{Interfaces}
\newcommand{\ip}{\sc ip}
\newcommand{\dma}{\sc dma}
\newcommand{\soc}{\sc SoC}
\newcommand{\mwmr}{\sc mwmr}
Designing the hardware/software interface has been a difficult task since the advent of complex systems on chip. After the first co-design environments~\cite{Coware,Polis,Ptolemy}, the Hardware Abstraction Layer was defined so that software applications can be developed without low-level hardware implementation details. In~\cite{jerraya}, Yoo and Jerraya propose an {\sc api} with extension capability instead of a unique hardware abstraction layer. System-level communication frameworks have also been introduced~\cite{JerrayaPetrot,mwmr}.
\par
A good abstraction of the hardware/software interface has been proposed in~\cite{Jantsch}: it is composed of a software driver, a {\dma} and a bus interface circuit. Automatic wrapping between bus protocols has generated a lot of papers~\cite{Avnit,smith,Narayan,Alberto}, but these works do not use a {\dma}. In COACH, the hardware/software interface is handled at a higher level and uses burst communication in the bus interface circuit to improve communication performance.
\par
There are two important projects related to the efficient interfacing of data-flow {\ip}s: the work of Park and Diniz~\cite{Park01} and the Lip6 work on {\mwmr}~\cite{mwmr}. Park and Diniz~\cite{Park01} proposed a generic interface that can be parameterized to connect different data-flow {\ip}s. This approach does not require the communications to be statically known and relies on a runtime resolution of conflicting accesses to the bus. To our knowledge, this approach has not been developed further since 2003.
\par
{\mwmr}~\cite{mwmr} stands for both a computation model (multi-write, multi-read {\sc fifo}) inherited from Kahn Process Networks and a bus interface circuit protocol. As with the work of Park and Diniz, {\mwmr} does not assume a static communication flow. This makes the software driver simple to write, but it introduces additional complexity due to the mutual exclusion locks necessary to protect the shared memory.
\par
In COACH, we propose to use recent work on hardware/software interfaces~\cite{FR-vlsi} that relies on a {\em clever} {\dma} responsible for managing data streams. The assumption is that the behavior of the {\ip}s can be statically described. A similar choice has been made in the Faust {\soc}~\cite{FAUST}, which includes a {\em smart memory engine} component. Jantsch and O'Nils already noticed in~\cite{Jantsch} the huge complexity of writing this hardware/software interface. In COACH, the interface will be generated automatically; this is one of the goals of the CITI contribution to COACH.
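To fix ideas, the following C sketch shows what a static stream description handed to such a {\dma} might look like. It is a minimal, hypothetical illustration under the assumption of statically known streams; the names and fields are ours and do not correspond to the actual interfaces of~\cite{FR-vlsi} or~\cite{FAUST}.
\begin{verbatim}
/* Hypothetical descriptor of a statically known data stream: because the
 * IP's access pattern is fixed at design time, the driver fills in this
 * structure once and the DMA engine then moves the data in bursts,
 * without per-word intervention from the CPU. */
struct stream_desc {
    unsigned long base;   /* start address of the buffer in shared memory */
    unsigned int  burst;  /* words transferred per bus burst              */
    unsigned int  stride; /* address increment between consecutive bursts */
    unsigned int  count;  /* total number of bursts in the stream         */
};

/* Hypothetical driver entry point: program the DMA channel and return;
 * the CPU is only interrupted when the whole stream has completed. */
int dma_start_stream(int channel, const struct stream_desc *d);
\end{verbatim}
Generating such descriptors and the corresponding drivers automatically from the system-level description is the kind of interface generation targeted by the CITI contribution.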