
Version 24 (modified by alain, 10 years ago)


GIET_VM / Mapping

The GIET_VM bootloader loads the GIET_VM kernel and the user application(s) on the target architecture. All user application software objects, the kernel code, and the critical kernel structures (such as the page tables or the processor schedulers) are statically built by the GIET_VM bootloader, as specified by the mapping directives.

The main goal of this static approach is to allow the system designer to control the placement of the tasks on the processors, but also to control the placement of software objects on the distributed physical memory banks. It supports replication of (read-only) critical objects such as kernel code, user code, or page tables. The page tables are statically initialized in the boot phase, and are not modified anymore in the execution phase.

To define the mapping, the system designer must provide a map.bin file containing a dedicated C binary data structure, which is loaded into memory by the bootloader.

The next section describes this C binary structure. The following sections describe how this binary file can be generated by the genmap tool from Python scripts. The genmap tool also generates a readable map.xml representation of the map.bin file.

Mapping content

The mapping contains the following information:

  1. The mapping contains a description of the target clusterized hardware architecture, with the following constraints:
    • All processor cores are identical (MIPS32).
    • The clusters form a 2D mesh topology. The mesh size is defined by the (X_SIZE,Y_SIZE) parameters.
    • The number of processors per cluster is defined by the NPROCS parameter.
    • The number of physical memory banks is variable (typically one physical memory bank per cluster).
    • Most peripherals are external and localized in one specific I/O cluster.
    • A small number of peripherals (such as the XCU interrupt controller) are internal and replicated in each cluster containing processors.
    • The physical address is the concatenation of 3 fields: the LSB field (32 bits) defines a 4 Gbytes physical address space inside a single cluster, and the X and Y MSB fields (up to 4 bits each) define the cluster coordinates.
  2. The mapping contains a description of the GIET_VM kernel software objects (called virtual segments or vsegs):
    • The kernel code is replicated in all clusters. Each copy is a vseg.
    • There is one page table for each user application, and this page table is replicated in each cluster. Each copy is a vseg.
    • The kernel heap is distributed in all clusters. Each heap section is a vseg.
    • Finally there is a specific vseg for each peripheral (both internal and external), containing the peripheral addressable registers.

Since all these vsegs are accessed by all user applications, they must be defined in all virtual spaces and mapped in all page tables. They are called global vsegs.

  3. The mapping contains a description of the user applications to be launched on the platform. A user application is characterized by a virtual address space, called a vspace. A user application can be multi-threaded, and the number of parallel tasks sharing the same address space in a given application is variable (it can be one). Each task must be statically placed on a processor P(x,y,p). Moreover, each application defines a variable number of vsegs:
    • The application code can be defined as a single vseg, placed in a single cluster. It can also be replicated in all clusters, with one vseg per cluster.
    • There is one stack for each application task. There is one vseg per stack, and each stack vseg must be placed in a specific cluster(x,y).
    • The data vseg contains the global (shared) variables. It is not replicated, and must be placed in a single cluster.
    • The user heap can be physically distributed on all clusters, with one heap vseg per cluster.
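The 3-field physical address format described in point 1 can be sketched as follows. This is a minimal illustration, assuming 4-bit X and Y fields above a 32-bit local offset; the actual widths come from the Mapping parameters:

```python
# Build a physical address from cluster coordinates and a 32-bit local
# offset (4-bit X/Y field widths are assumptions for this sketch).
Y_WIDTH = 4
LSB_WIDTH = 32

def make_paddr(x, y, offset):
    """Concatenate X, Y and the 32-bit intra-cluster offset."""
    return (x << (Y_WIDTH + LSB_WIDTH)) | (y << LSB_WIDTH) | offset

# Offset 0x1000 inside cluster (1, 2):
assert make_paddr(1, 2, 0x1000) == 0x1200001000
```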

C mapping data structure

The C binary mapping data structure is defined in the mapping_info.h file, and is organised as the concatenation of a fixed size header, and 10 variable size arrays:

mapping_cluster_t a cluster contains psegs, processors, peripherals and coprocessors
mapping_pseg_t a physical segment defined by a name, a base address and a size (bytes)
mapping_vspace_t a virtual space contains several vsegs and several parallel tasks
mapping_vseg_t a virtual segment contains one software object
mapping_task_t a task must be statically associated to a processor
mapping_proc_t a processor can contain IRQ inputs (for XCU or PIC peripherals)
mapping_irq_t an interrupt source
mapping_coproc_t a coprocessor contains several cp_ports
mapping_cp_port_t a coprocessor port

The map.bin file must be stored on disk and will be loaded in memory by the GIET_VM bootloader in the seg_boot_mapping segment.

Python mapping description

A specific mapping requires at least two python files:

  • The arch.py file is attached to a given hardware architecture. It describes both the (possibly generic) hardware architecture, and the mapping of the kernel software objects on this hardware architecture.
  • The appli.py file is attached to a given user application. It describes the application structure (tasks and communication channels), and the mapping of the application tasks and software objects on the architecture.

The various Python classes used by these files are defined in the mapping.py file.

Python hardware architecture description

The target hardware architecture must be defined in the arch.py file, using the following constructors:

1. Mapping

The Mapping( ) constructor builds a mapping object and defines the target architecture general parameters:

name mapping name == architecture name
x_size number of clusters in a row of the 2D mesh
y_size number of clusters in a column of the 2D mesh
nprocs max number of processors per cluster
x_width number of bits to encode X coordinate in paddr
y_width number of bits to encode Y coordinate in paddr
p_width number of bits to encode local processor index
paddr_width number of bits in physical address
coherence Boolean true if hardware cache coherence
irq_per_proc number of IRQ lines between XCU and one proc (GIET_VM use only one)
use_ramdisk Boolean true if the architecture contains a RamDisk
x_io io_cluster X coordinate
y_io io_cluster Y coordinate
peri_increment virtual address increment for peripherals replicated in all clusters
reset_address physical base address of the ROM containing the preloader code
ram_base physical memory bank base address in cluster [0,0]
ram_size physical memory bank size in one cluster (bytes)
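A typical arch.py call can be sketched as below. The Mapping class here is a minimal stand-in (the real one, defined in mapping.py, does much more); every parameter value is hypothetical:

```python
# Minimal stand-in for the Mapping constructor of mapping.py, just to
# illustrate the parameter set (all values below are hypothetical).
class Mapping:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)   # the real class also builds clusters

mapping = Mapping(
    name           = 'my_arch',     # hypothetical architecture name
    x_size         = 2,             # 2x2 mesh of clusters
    y_size         = 2,
    nprocs         = 4,             # up to 4 cores per cluster
    x_width        = 4,             # X bits in physical address
    y_width        = 4,             # Y bits in physical address
    p_width        = 2,             # local processor index bits
    paddr_width    = 40,            # 40-bit physical address
    coherence      = True,          # hardware cache coherence
    irq_per_proc   = 1,             # GIET_VM uses a single IRQ line
    use_ramdisk    = False,
    x_io           = 0,             # I/O cluster coordinates
    y_io           = 0,
    peri_increment = 0x10000,       # hypothetical replication stride
    reset_address  = 0xFF000000,    # hypothetical preloader ROM base
    ram_base       = 0x00000000,    # RAM base in cluster [0,0]
    ram_size       = 0x4000000,     # 64 Mbytes per cluster
)
assert mapping.x_size * mapping.y_size == 4
```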

2. Processor core

The mapping.addProc( ) construct adds one MIPS32 processor core in a cluster (the number of processor cores can differ between clusters). It has the following arguments:

x cluster X coordinate
y cluster Y coordinate
p processor local index

The global processor index is : ( ( ( x << y_width ) + y ) << p_width ) + p
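This index computation can be sketched directly; the y_width = 4 and p_width = 2 values are assumptions, the real ones come from the Mapping parameters:

```python
# Global processor index: ( ( ( x << y_width ) + y ) << p_width ) + p
Y_WIDTH = 4   # assumed: bits encoding the Y coordinate
P_WIDTH = 2   # assumed: bits encoding the local processor index

def proc_index(x, y, p):
    """Return the global index of processor P(x, y, p)."""
    return (((x << Y_WIDTH) + y) << P_WIDTH) + p

# Processor 1 in cluster (2, 3):
assert proc_index(2, 3, 1) == 141
```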

3. Physical memory bank

The mapping.addRam( ) construct adds one physical memory bank, and the associated physical segment in a cluster. It has the following arguments:

name segment name
base physical memory bank base address
size physical memory bank size (bytes)

The target cluster coordinates (x,y) are implicitly defined by the base address MSB bits.
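Under the same assumed address layout as before (4-bit X/Y fields above a 32-bit local offset), the implicit cluster of a memory bank can be recovered like this:

```python
# Extract cluster coordinates from a pseg base address
# (4-bit X/Y fields above a 32-bit offset are assumed widths).
X_WIDTH, Y_WIDTH, LSB_WIDTH = 4, 4, 32

def cluster_from_base(base):
    """Return the (x, y) pair encoded in the MSB bits of a base address."""
    x = (base >> (Y_WIDTH + LSB_WIDTH)) & ((1 << X_WIDTH) - 1)
    y = (base >> LSB_WIDTH) & ((1 << Y_WIDTH) - 1)
    return x, y

# A RAM bank whose base has X = 2 and Y = 3 lands in cluster (2, 3):
base = (2 << 36) | (3 << 32)
assert cluster_from_base(base) == (2, 3)
```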

4. Physical peripheral

The mapping.addPeriph( ) construct adds one peripheral, and the associated physical segment in a cluster. It has the following arguments:

name segment name
base peripheral segment physical base address
size peripheral segment size (bytes)
ptype Peripheral type
subtype Peripheral subtype
channels number of channels for multi-channels peripherals
arg optional argument depending on peripheral type

The target cluster coordinates (x,y) are defined by the base address MSB bits. The supported peripheral types and subtypes are defined in the mapping.py file. The optional argument has the following semantics:

  • for XCU : max number of WTI = max number of HWI = max number of PTI
  • for FBF : number of lines = number of pixels per line

5. Interrupt line

The mapping.addIrq() construct adds one IRQ line input to an XCU peripheral, or to a PIC peripheral. It has the following arguments:

periph peripheral receiving the IRQ line
index input port index
isrtype Interrupt Service Routine type
channel channel index for multi-channel ISR

The supported ISR types are defined in the mapping.py file.

Python kernel mapping

The mapping of the GIET_VM vsegs must be defined in the arch.py file.

Each kernel virtual segment has the global attribute, and must be mapped in all vspaces. It can be mapped to a set of consecutive small pages (4 Kbytes), or to a set of consecutive big pages (2 Mbytes).

The mapping.addGlobal() construct defines the mapping for one kernel vseg. It has the following arguments:

name virtual segment name
vbase virtual base address
size segment size (bytes)
mode access rights (CXWU)
vtype vseg type
x destination cluster X coordinate
y destination cluster Y coordinate
pseg destination pseg name
identity identity mapping required (default = False)
binpath pathname for binary file if required (default = ' ')
align alignment constraint if required (default = 0)
local only mapped in local page table if true (default = False)
big to be mapped in big pages (default = False)

The supported values for the mode argument, and for the vtype arguments are defined in the mapping.py file.

The (x, y, pseg) arguments actually define the vseg placement.

1. Boot vsegs

There are 4 global vsegs for the GIET_VM bootloader:

  • The seg_boot_mapping vseg contains the C binary structure defining the mapping. It is loaded from disk by the boot-loader.
  • The seg_boot_code vseg contains the boot-loader code. It is loaded from disk by the preloader.
  • The seg_boot_data vseg contains the boot-loader global data.
  • The seg_boot_stacks vseg contains the stacks for all processors.

These 4 vsegs must be identity mapped (because the page tables are not yet available), and are mapped in the first big physical page (2 Mbytes) in cluster [0][0].
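The identity-mapping constraint can be checked mechanically. The base addresses and sizes below are hypothetical; only the two rules matter (vbase equals pbase, and every vseg fits in the first 2 Mbytes big page):

```python
# Check the boot vseg constraints (vbase, pbase, size values are
# hypothetical; only the identity and containment rules are real).
BIG_PAGE = 0x200000   # 2 Mbytes big physical page

boot_vsegs = {
    'seg_boot_mapping': (0x000000, 0x000000, 0x080000),
    'seg_boot_code':    (0x080000, 0x080000, 0x040000),
    'seg_boot_data':    (0x0C0000, 0x0C0000, 0x040000),
    'seg_boot_stacks':  (0x100000, 0x100000, 0x100000),
}

for name, (vbase, pbase, size) in boot_vsegs.items():
    assert vbase == pbase, name + ' must be identity mapped'
    assert vbase + size <= BIG_PAGE, name + ' must fit in the first big page'
```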

2. Kernel vsegs

Most kernel vsegs are replicated or distributed in all clusters, to improve locality and minimize contention, as explained below:

  • The seg_kernel_ptab_x_y vsegs have type PTAB. They contain the page tables for all vspaces (one page table per vspace). There is one such vseg in each cluster (one set of page tables per cluster). Each PTAB vseg is mapped in one big physical page.
  • The seg_kernel_code & seg_kernel_init vsegs have type ELF. They contain the kernel code. These two vsegs are mapped in one big physical page, and are replicated in each cluster. The local attribute must be set, because the same virtual address will be mapped on different physical addresses depending on the cluster.
  • The seg_kernel_data vseg has type ELF, and contains the kernel global data. It is not replicated, and is mapped in cluster [0][0].
  • The seg_kernel_sched_x_y vsegs have type SCHED. They contain the processor schedulers (one scheduler per processor). There is one such vseg in each cluster, and it must be mapped on small pages (two small pages per scheduler).
  • The seg_kernel_heap_x_y vsegs have type HEAP, and contain the distributed kernel heap. There is one HEAP vseg per cluster, and each is mapped in at least one big page.
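The per-cluster naming scheme of these replicated vsegs (seg_kernel_ptab_x_y, seg_kernel_sched_x_y, ...) lends itself to a simple generation loop in arch.py; the 2x2 mesh size below is an assumption:

```python
# Enumerate the replicated per-cluster kernel vsegs for an assumed
# 2x2 mesh; real arch.py scripts loop the same way over all clusters.
X_SIZE, Y_SIZE = 2, 2

ptab_vsegs = ['seg_kernel_ptab_%d_%d' % (x, y)
              for x in range(X_SIZE) for y in range(Y_SIZE)]

assert ptab_vsegs == ['seg_kernel_ptab_0_0', 'seg_kernel_ptab_0_1',
                      'seg_kernel_ptab_1_0', 'seg_kernel_ptab_1_1']
```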

3. Peripheral vsegs

A global vseg must be defined for each addressable peripheral. As a general rule, we use big physical page(s) for each external peripheral, and one small physical page for each replicated peripheral.

Python user application mapping

The mapping of a given application must be defined in the appli.py file.

A vspace, containing a variable number of tasks, and a variable number of vsegs, must be defined for each application.

There are several types of user vsegs:

  • The code vsegs must have type ELF. They can be (optionally) replicated in all clusters.
  • The data vseg must have type ELF. It is not replicated. It must contain the start_vector defining the entry points of the application tasks.
  • There must be as many stack vsegs as application tasks. They have type BUFFER, and should be placed in the cluster containing the processor running the task.
  • Zero, one or several heap vseg(s), can be used by the malloc user library. They must have type HEAP.
  • One or several mwmr vseg(s) can be used by the mwmr user library. They must have the BUFFER type.

1. Create the vspace

The mapping.addVspace( ) construct defines a vspace. It has the following arguments:

name vspace name == application name
startname name of vseg containing the start_vector

2. Vseg mapping

The mapping.addVseg( ) construct defines the mapping of a vseg in the vspace. It has the following arguments:

vspace vspace containing the vseg
name vseg name
vbase virtual base address
size vseg size (bytes)
mode access rights (CXWU)
vtype vseg type
x destination cluster X coordinate
y destination cluster Y coordinate
pseg destination pseg name
binpath pathname for binary file if required (default = ' ')
align alignment constraint if required (default = 0)
local only mapped in local page table if true (default = False)
big to be mapped in big pages (default = False)

The supported values for the mode argument, and for the vtype arguments are defined in the mapping.py file.

The (x, y, pseg) arguments actually define the vseg placement.

3. Task mapping

The mapping.addTask( ) construct defines the mapping of a task in the vspace. It has the following arguments:

vspace vspace containing the task
name task name (unique in vspace)
trdid thread index (unique in vspace)
x destination cluster X coordinate
y destination cluster Y coordinate
lpid destination processor local index
stackname name of vseg containing the task stack
heapname name of vseg containing the task heap
startid index in start vector (defining the task entry point virtual address)

The (x, y, lpid) arguments actually define the task placement.
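A task placement call can be sketched with a stand-in recorder (the real addTask lives in mapping.py; the application, task, and vseg names below are hypothetical):

```python
# Stand-in for mapping.addTask( ), recording the task placement
# arguments; all names and indices below are hypothetical.
tasks = []

def add_task(vspace, name, trdid, x, y, lpid,
             stackname, heapname, startid):
    """Record a task statically placed on processor P(x, y, lpid)."""
    tasks.append(dict(vspace=vspace, name=name, trdid=trdid,
                      x=x, y=y, lpid=lpid, stackname=stackname,
                      heapname=heapname, startid=startid))

# Place thread 0 of application 'sort' on processor P(1, 0, 2),
# with its stack and heap vsegs mapped in the same cluster (1, 0):
add_task('sort', 'main', 0, 1, 0, 2, 'sort_stack_0', 'sort_heap_1_0', 0)
assert (tasks[0]['x'], tasks[0]['y'], tasks[0]['lpid']) == (1, 0, 2)
```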