GIET_VM / Boot Procedure
The system boot is done in three phases.
Phase 0 : Reset
This phase is executed after a hard reset: all processors execute the reset code (also called preloader code) stored in the external ROM, but the work done depends on the processor global index. This reset code is generic: it depends only on the target hardware architecture, and can be used to boot any operating system.
- Processor (0,0,0) loads the GIET_VM boot-loader code from the external disk (or any other mass-storage peripheral) to the physical memory of cluster(0,0).
- All other processors initialize their private interrupt controller, to be able to receive an inter-processor interrupt (WTI), and enter a low-power wait state (see the sketch below).
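A minimal sketch of this dispatch on the global processor index, assuming hypothetical helper names (_get_procid(), _ioc_read(), _icu_init(), _wait_wti()) and illustrative constants; the actual preloader is architecture-specific:

extern unsigned int _get_procid(void);                             /* assumed */
extern void _ioc_read(unsigned int lba, void *buf, unsigned int n);/* assumed */
extern void _icu_init(unsigned int gpid);                          /* assumed */
extern void _wait_wti(void);                                       /* assumed */

#define BOOT_LBA     2           /* assumed disk sector of the boot-loader */
#define BOOT_SECTORS 128         /* assumed boot-loader size in sectors    */
#define BOOT_BASE    0x00000000  /* assumed load address in cluster(0,0)   */

void reset(void)
{
    unsigned int gpid = _get_procid();   /* global processor index */

    if (gpid == 0)
    {
        /* processor (0,0,0) : load the GIET_VM boot-loader from disk
           into the physical memory of cluster(0,0), then jump to it */
        _ioc_read(BOOT_LBA, (void *)BOOT_BASE, BOOT_SECTORS);
        ((void (*)(void))BOOT_BASE)();
    }
    else
    {
        /* other processors : enable the WTI channel in the private
           interrupt controller, then wait in low-power mode */
        _icu_init(gpid);
        _wait_wti();
    }
}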
Phase 1 : Boot
In this phase the GIET_VM boot-loader is executed by processor (0,0,0), while all other processors remain in wait state. The GIET_VM boot-loader is defined in the boot.c file. The main steps are the following:
void boot_pmem_init()
This function initialises the physical memory allocators. The GIET_VM uses two types of pages:
- BPP : Big Physical Page (2 Mbytes).
- SPP : Small Physical Page (4 Kbytes).
There is one SPP allocator and one BPP allocator per cluster containing a physical memory bank. All physical memory allocation is done by the boot-loader during the boot phase, and these allocators should not be used by the kernel during the execution phase.
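Because pages are allocated once and never released during boot, the allocators can be very simple. A minimal sketch, assuming one descriptor per cluster (field and function names are illustrative, not the actual boot.c definitions):

#define BPP_SIZE 0x200000   /* big   physical page : 2 Mbytes */
#define SPP_SIZE 0x1000     /* small physical page : 4 Kbytes */

typedef struct pmem_alloc_s
{
    unsigned int bpp_next;  /* next free big  page index in the cluster */
    unsigned int bpp_max;   /* total number of big pages in the cluster */
    unsigned int spp_next;  /* next free small page index               */
    unsigned int spp_max;   /* total number of small pages              */
} pmem_alloc_t;

/* linear (bump) allocation : pages are never released during boot */
unsigned int boot_bpp_alloc(pmem_alloc_t *p)
{
    if (p->bpp_next >= p->bpp_max)
        while (1);          /* panic : no free big page in this cluster */
    return p->bpp_next++;
}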
void boot_ptabs_init()
This function initialises the page tables. There is one page table per user application (vspace) defined in the mapping. All these page tables are packed in one segment (seg_ptab) occupying a single big page (2 Mbytes), and this PTAB segment is replicated in all clusters. As the kernel read-only segments (seg_kcode and seg_kinit) are replicated in all clusters to avoid contention, the content of the page tables depends on the cluster coordinates: for the kernel code, a given virtual address is mapped to different physical addresses, depending on the cluster.
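The cluster-dependent mapping of the kernel code can be illustrated as follows. The helper _set_pte2(), the PTE flags, and the physical address layout (cluster coordinates in the most significant bits, as on a TSAR-like architecture) are assumptions for this sketch:

extern void _set_pte2(unsigned int cx, unsigned int cy,        /* assumed */
                      unsigned int vpn, unsigned long long ppn,
                      unsigned int flags);

#define SEG_KCODE_VBASE 0x80000000  /* assumed kernel code virtual base */
#define KCODE_OFFSET    0x00010000  /* assumed offset of the local copy */
#define PTE_V 0x1                   /* illustrative PTE flags           */
#define PTE_X 0x2
#define PTE_C 0x4

/* map the local copy of the kernel code in the page table of
   cluster (cx,cy) : same virtual page, cluster-dependent physical page */
void map_kcode(unsigned int cx, unsigned int cy, unsigned int y_width)
{
    unsigned int       vpn   = SEG_KCODE_VBASE >> 12;
    unsigned long long paddr = ((unsigned long long)((cx << y_width) | cy) << 32)
                               | KCODE_OFFSET;
    _set_pte2(cx, cy, vpn, paddr >> 12, PTE_V | PTE_X | PTE_C);
}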
boot_schedulers_init()
This function initialises the schedulers, as specified in the mapping, respecting the following principles (see the sketch after this list):
- There is one scheduler per processor.
- Any task defined in any application can be allocated to any processor.
- The allocation of task to processors is fully static (no task migration).
- A single processor cannot schedule more than 14 tasks.
- One scheduler occupies 8 Kbytes, and contains the contexts of all tasks allocated to the processor (256 bytes per task).
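These numbers suggest a layout such as the following sketch; the field names and the exact use of the space left after the 14 contexts are assumptions:

typedef struct static_scheduler_s
{
    unsigned int context[14][64];   /* at most 14 contexts, 256 bytes each */
    unsigned int tasks;             /* number of tasks allocated           */
    unsigned int current;           /* index of the running task           */
    unsigned int hwi_vector[32];    /* hardware interrupt vector           */
    unsigned int pti_vector[32];    /* timer    interrupt vector           */
    unsigned int wti_vector[32];    /* software interrupt vector (WTI)     */
    unsigned int reserved[1054];    /* padding up to 8 Kbytes              */
} static_scheduler_t;               /* sizeof == 8192 bytes                */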
boot_peripherals_init()
This function initialises the external peripherals and coprocessors.
boot_elf_load()
This function loads into memory the kernel code (kernel.elf file) and the user code for all applications specified in the mapping.
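A stripped-down sketch of such a loader, iterating over the standard Elf32 program headers; the copy helper and the address translation helper are assumptions:

typedef struct      /* Elf32 file header, truncated after the fields used */
{
    unsigned char  e_ident[16];
    unsigned short e_type;
    unsigned short e_machine;
    unsigned int   e_version;
    unsigned int   e_entry;
    unsigned int   e_phoff;      /* program header table offset in the file */
    unsigned int   e_shoff;
    unsigned int   e_flags;
    unsigned short e_ehsize;
    unsigned short e_phentsize;
    unsigned short e_phnum;      /* number of program headers               */
} Elf32_Ehdr;

typedef struct      /* Elf32 program header */
{
    unsigned int p_type;
    unsigned int p_offset;       /* segment offset in the file   */
    unsigned int p_vaddr;        /* segment virtual base address */
    unsigned int p_paddr;
    unsigned int p_filesz;       /* bytes to copy from the file  */
    unsigned int p_memsz;
    unsigned int p_flags;
    unsigned int p_align;
} Elf32_Phdr;

#define PT_LOAD 1

extern unsigned long long _vaddr_to_paddr(unsigned int vaddr);  /* assumed */
extern void _physical_memcpy(unsigned long long dst,            /* assumed */
                             void *src, unsigned int size);

void load_elf(unsigned char *file)  /* .elf file already read into a buffer */
{
    Elf32_Ehdr *ehdr = (Elf32_Ehdr *)file;
    Elf32_Phdr *phdr = (Elf32_Phdr *)(file + ehdr->e_phoff);

    for (unsigned int i = 0; i < ehdr->e_phnum; i++)
    {
        if (phdr[i].p_type != PT_LOAD) continue;
        /* copy the loadable segment to the physical address defined
           by the mapping for this virtual segment */
        _physical_memcpy(_vaddr_to_paddr(phdr[i].p_vaddr),
                         file + phdr[i].p_offset,
                         phdr[i].p_filesz);
    }
}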
Finally, processor (0,0,0) starts all other processors, using an inter-processor interrupt (WTI). Each processor then initializes its own CP0_SCHED register, its own CP2_MODE register to activate its MMU, and its own CP0_SR register to use the GIET_VM exception handler, and jumps to the kernel_init() function.
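This per-processor register setup can be sketched as follows; the CP0/CP2 register numbers follow the usual SoCLib vci_cc_vcache conventions but should be checked against the actual platform, and the SR value is only illustrative:

extern void kernel_init(void);

/* executed by each processor after it has been woken up by the WTI */
void boot_to_kernel(unsigned int sched_vaddr, unsigned int ptpr)
{
    /* CP0_SCHED : scheduler virtual address, read back later by the kernel */
    asm volatile("mtc0 %0, $4, 2" :: "r"(sched_vaddr));
    /* CP2_PTPR : page table pointer for this cluster                       */
    asm volatile("mtc2 %0, $0"    :: "r"(ptpr));
    /* CP2_MODE : activate ITLB / DTLB / ICACHE / DCACHE                    */
    asm volatile("mtc2 %0, $1"    :: "r"(0xF));
    /* CP0_SR : clear BEV so exceptions use the GIET_VM handler, mask IRQs  */
    asm volatile("mtc0 %0, $12"   :: "r"(0x0));
    kernel_init();
}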
Phase 2 : Kernel
This phase is executed by all processors in parallel: all processors enter the same kernel_init() function. The main steps are the following (a condensed sketch follows the list):
- Step 1 : each processor gets its scheduler virtual address from the CP0_SCHED register and contributes to the _schedulers[] array initialisation.
- Step 2 : each processor loops on all its allocated tasks to build the _ptabs_vaddr[] and _ptabs_ptprs[] arrays from the task contexts.
- Step 3 : each processor computes and sets the XCU masks, as specified in the HWI, PTI, and WTI interrupt vectors.
- Step 4 : each processor starts its TICK timer if it has at least one task allocated.
- Step 5 : each processor updates the idle_task context (CTX_SP, CTX_RA, CTX_EPC).
- Step 6 : when all processors reach the synchronisation barrier, each processor sets the SP, SR, PTPR, and EPC registers with the values corresponding to the first allocated task, and jumps to user code.
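A condensed sketch of kernel_init() mirroring these six steps; all helper names, the barrier primitive, and the tick period are assumptions:

#define TICK_PERIOD 0x8000                     /* assumed tick period      */

extern void *_get_sched(void);                 /* read CP0_SCHED           */
extern unsigned int _get_procid(void);
extern void *_schedulers[];
extern unsigned int _sched_tasks(void *sched);
extern void _task_context_scan(void *sched, unsigned int ltid);
extern void _xcu_set_masks(unsigned int gpid, void *sched);
extern void _xcu_timer_start(unsigned int gpid, unsigned int period);
extern void _idle_task_init(void *sched);
extern void _barrier_wait(void *barrier);
extern void *_kernel_barrier;
extern void _first_task_jump(void *sched);     /* loads SP/SR/PTPR/EPC     */

void kernel_init(void)
{
    /* step 1 : register the local scheduler */
    void *sched = _get_sched();
    unsigned int gpid = _get_procid();
    _schedulers[gpid] = sched;

    /* step 2 : build _ptabs_vaddr[] / _ptabs_ptprs[] from task contexts */
    for (unsigned int ltid = 0; ltid < _sched_tasks(sched); ltid++)
        _task_context_scan(sched, ltid);

    /* step 3 : compute and set the XCU masks (HWI / PTI / WTI vectors) */
    _xcu_set_masks(gpid, sched);

    /* step 4 : start the TICK timer if at least one task is allocated */
    if (_sched_tasks(sched)) _xcu_timer_start(gpid, TICK_PERIOD);

    /* step 5 : complete the idle task context (CTX_SP, CTX_RA, CTX_EPC) */
    _idle_task_init(sched);

    /* step 6 : synchronise, then load the first task context and jump */
    _barrier_wait(_kernel_barrier);
    _first_task_jump(sched);
}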