Changes between Version 58 and Version 59 of io_operations
Timestamp: Jan 22, 2020, 4:03:48 PM
= io_operations =
== B) Generic Devices APIs ==

To represent the peripherals, ALMOS-MKH uses device descriptors. For multi-channel peripherals, ALMOS-MKH defines one ''chdev_t'' structure per channel. This descriptor contains the functional index, the implementation index, the channel index, and the physical base address of the segment containing the addressable registers for this peripheral channel. It also contains the root of a waiting queue of pending commands registered by the various client threads.

The generic ''chdev'' API is defined in the [https://www-soc.lip6.fr/trac/almos-mkh/browser/trunk/kernel/kern/chdev.h chdev.h] and [https://www-soc.lip6.fr/trac/almos-mkh/browser/trunk/kernel/kern/chdev.c chdev.c] files.

For each functional type, a specific API defines the list of available commands, and the specific structure defining the command type and the argument values. As an I/O operation is blocking for the calling thread, a client thread can only post one command at a given time.
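The descriptor fields listed above can be sketched as a C structure. This is an illustration only: the field names and the ''xptr_t'' definition below are assumptions, not the actual ALMOS-MKH declarations from chdev.h.

```c
#include <stdint.h>

/* Illustrative extended pointer: cluster identifier + local address,
   packed in 64 bits (an assumption, not the real ALMOS-MKH type). */
typedef uint64_t xptr_t;

/* Hypothetical per-channel device descriptor, mirroring the fields
   described in the text above (names are invented). */
typedef struct chdev_s
{
    uint32_t func;       /* functional index                              */
    uint32_t impl;       /* implementation index                          */
    uint32_t channel;    /* channel index for multi-channel peripherals   */
    xptr_t   base;       /* physical base address of the segment
                            containing the addressable registers          */
    xptr_t   wait_root;  /* root of the waiting queue of pending commands
                            registered by client threads                  */
} chdev_t;
```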
This command is registered in the client thread descriptor, to be passed to the hardware-specific driver.

The set of supported generic devices, and their associated APIs, are defined below:

'''Internal peripherals''' are replicated in all clusters. In each cluster, the device descriptor is stored in the same cluster as the device itself. These device descriptors are shared resources: they are mostly accessed by the local kernel instance, but can also be accessed by threads running in another cluster. This is the case for both the ICU and the MMC devices.

'''External peripherals''' are shared resources, accessed through the bridge located in the I/O cluster. To minimize contention, the corresponding device descriptors are distributed on all clusters, as uniformly as possible. Therefore, an I/O operation generally involves three clusters:
* the '''client cluster''' contains the client thread.
* the '''I/O cluster''' contains the external peripheral.
* the '''server cluster''' contains the ''chdev'' descriptor.

The ''devices_directory_t'' structure contains extended pointers on all device descriptors available in a given manycore architecture.
This structure is organized as a set of arrays:
* There is one entry per channel for each '''external peripheral''', and the corresponding array is indexed by the channel index.
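As a sketch, the per-channel arrays of the directory could look like the following. The array names, the channel bound, and the accessor function are invented for illustration; only the indexing scheme (one entry per channel, indexed by channel index) comes from the text above.

```c
#include <stdint.h>

typedef uint64_t xptr_t;             /* extended pointer on a remote chdev */

#define ILLUSTRATIVE_MAX_CHANNELS 8  /* invented bound for this sketch     */

/* One array per external peripheral type, indexed by channel index. */
typedef struct devices_directory_s
{
    xptr_t txt[ILLUSTRATIVE_MAX_CHANNELS];  /* text terminal channels */
    xptr_t ioc[ILLUSTRATIVE_MAX_CHANNELS];  /* block device channels  */
    /* ... one array per functional type ... */
} devices_directory_t;

/* Returns the extended pointer on the chdev descriptor of a given
   TXT channel, or 0 when the channel index is out of range. */
xptr_t directory_get_txt( devices_directory_t * dir, uint32_t channel )
{
    if( channel >= ILLUSTRATIVE_MAX_CHANNELS ) return 0;
    return dir->txt[channel];
}
```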
== D) Waiting queue Management ==

The commands waiting queue is implemented as a distributed XLIST, rooted in the ''chdev'' descriptor. To launch an I/O operation, a client thread, running in any cluster, calls a function of the device API. This function registers the command in the thread descriptor, and registers the thread in the waiting queue.

For most I/O operations, ALMOS-MKH implements a blocking policy: the thread calling a command function is blocked on the THREAD_BLOCKED_IO condition, and descheduled. It will be re-activated by the driver signaling the completion of the I/O operation.

The waiting queue is handled as a Multi-Writers / Single-Reader FIFO, protected by a remote_lock. The N writers are the client threads, whose number is not bounded. The single reader is the server thread associated to the device descriptor, and created at kernel initialization. This thread is in charge of consuming the pending commands from the waiting queue.
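The Multi-Writers / Single-Reader discipline described above can be sketched as follows. This mock runs in a single address space with invented names; the real implementation uses the distributed XLIST, extended pointers, and the remote_lock, and actually blocks/unblocks the client and server threads.

```c
#include <stddef.h>

/* Illustrative command queue: client threads enqueue at the tail
   (under the lock, omitted here), the single server thread dequeues
   from the head. */
typedef struct cmd_s
{
    int           type;    /* command type (e.g. read / write) */
    struct cmd_s *next;
} cmd_t;

typedef struct cmd_queue_s
{
    cmd_t *head;  /* next command consumed by the server thread */
    cmd_t *tail;  /* last command posted by a client thread     */
} cmd_queue_t;

/* client side: register a new pending command */
void queue_post( cmd_queue_t *q, cmd_t *cmd )
{
    cmd->next = NULL;
    if( q->tail ) q->tail->next = cmd;
    else          q->head       = cmd;
    q->tail = cmd;
    /* here: unblock the server thread if it was waiting on an empty queue */
}

/* server side: consume the oldest pending command, or NULL if empty */
cmd_t *queue_consume( cmd_queue_t *q )
{
    cmd_t *cmd = q->head;
    if( cmd )
    {
        q->head = cmd->next;
        if( q->head == NULL ) q->tail = NULL;
    }
    return cmd;   /* NULL => the server thread would block (empty queue) */
}
```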
When the queue is empty, the server thread blocks, because a DEV thread cannot be selected by the scheduler when the associated device queue is empty.

Finally, each generic device descriptor contains a link to the specific driver associated to the available hardware implementation. This link is established in the kernel initialization phase.

To start an I/O operation, the server thread associated to the device must call the specific driver corresponding to the hardware peripheral available in the architecture.

Any driver must therefore implement the three following functions:

'''driver_init()'''

This function initialises both the peripheral hardware registers, and the specific variables or data structures required by a given hardware implementation. It is called in the kernel initialization phase.

'''driver_cmd( xptr_t thread_xp , device_t * device )'''

This function is called by the server thread. It accesses the peripheral hardware registers to start the I/O operation. Depending on the hardware peripheral implementation, it can be blocking or non-blocking for the server thread.
* It is blocking on the THREAD_BLOCKED_DEV_ISR condition, if the hardware peripheral supports only one simultaneous I/O operation. Examples are a simple disk controller, or a text terminal controller. The blocked server thread must be re-activated by the ISR signaling the completion of the current I/O operation.
* It is non-blocking if the hardware peripheral supports several simultaneous I/O operations.
 An example is an AHCI-compliant disk controller: it blocks only if the number of simultaneous I/O operations becomes larger than the maximum number of concurrent operations supported by the hardware.

The ''thread_xp'' argument is the extended pointer on the client thread, containing the embedded command descriptor. The ''device'' argument is the local pointer on the device descriptor.

'''driver_isr( chdev_t * device )'''

This function is executed in the server cluster, by the core executing the server thread, because the IRQ signaling the completion of an I/O operation by a given peripheral channel is statically linked to this core during the kernel initialization. It accesses the peripheral hardware registers to get the I/O operation error status, acknowledge the IRQ, and unblock the server thread.
The ''device'' argument is the local pointer on the device descriptor.

== F) General I/O operation scenario ==

For an external peripheral, the I/O operation mechanism generally involves three clusters: the client cluster, the server cluster, and the I/O cluster.
It does not use any RPC, but only remote accesses, to execute the three steps of any I/O operation:
* To post a new command in the waiting queue of a given (remote) device descriptor, the client thread uses only a few remote accesses to register itself in the distributed XLIST rooted in the server cluster.
* To launch the I/O operation on the (remote) peripheral, the server thread uses only remote accesses to the physical registers located in the I/O cluster.
* To complete the I/O operation, the ISR running on the server cluster accesses peripheral registers in the I/O cluster, reports the I/O operation status in the command descriptor located in the client cluster, and unblocks the server thread. Finally, the server thread uses a remote access to unblock the client thread.

== G) Interrupts Routing ==

The completion of an I/O operation is signaled by the involved peripheral using an interrupt. In ALMOS-MKH, this interrupt is handled by the core running the server thread that launched the I/O operation. Therefore, the interrupt must be routed to the cluster containing the ''chdev'' involved in the I/O operation.

ALMOS-MKH makes the assumption that interrupt routing (from peripherals to cores) is done by a dedicated device, called the '''PIC''' (Programmable Interrupt Controller).
This device also helps the kernel interrupt handler to select the relevant ISR (Interrupt Service Routine) to be executed.

This generic PIC device is generally implemented as a ''distributed'' hardware infrastructure containing two types of hardware components:
* The IOPIC component (a single component, located in the I/O cluster) interfaces the external IRQs (one IRQ per channel) to the PIC infrastructure.
* The LAPIC components (one component per cluster) interface the PIC infrastructure to the local cores in a given cluster.
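The routing step can be sketched with a static table built at kernel initialization, under the assumption stated above that each external IRQ is statically bound to the core running the matching server thread. All names and the table layout are invented for illustration.

```c
#include <stdint.h>

#define MAX_EXTERNAL_IRQ 32   /* illustrative bound */

/* Static routing table: for each external IRQ (one per peripheral
   channel), the IOPIC forwards the interrupt to the LAPIC of the
   cluster/core running the matching server thread. */
typedef struct iopic_route_s
{
    uint32_t cluster;  /* cluster containing the chdev / server thread */
    uint32_t core;     /* local core index bound to this IRQ           */
} iopic_route_t;

iopic_route_t iopic_table[MAX_EXTERNAL_IRQ];

/* called at kernel init, when the chdev is placed in its server cluster */
void iopic_bind( uint32_t irq, uint32_t cluster, uint32_t core )
{
    if( irq < MAX_EXTERNAL_IRQ )
    {
        iopic_table[irq].cluster = cluster;
        iopic_table[irq].core    = core;
    }
}

/* called when an external IRQ is raised: returns the target cluster */
uint32_t iopic_target_cluster( uint32_t irq )
{
    return iopic_table[irq].cluster;
}
```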