rpc_implementation
 * For a given architecture, the physical address is therefore split into two fixed-size fields: the '''LPA''' field (Local Physical Address) contains the LSB bits, and defines the physical address inside a given cluster. The '''CXY''' field (Cluster Identifier Index) contains the MSB bits, and directly identifies the cluster.
 * Each cluster can contain several cores (including 0), several peripherals, and a physical memory bank of any size (including 0 bytes). This is defined in the ''arch_info'' file.
 * There is one kernel instance in each cluster containing at least one core, one interrupt controller, and one physical memory bank.

== 2) Unique access point in cluster ==

ALMOS-MKH replicates the KDATA segment (containing the kernel global variables) in all clusters, and uses the same LPA (Local Physical Address) for the KDATA segment base. Therefore, in two different clusters, a given global variable, identified by its LPA, can have different values. This feature is used by ALMOS-MKH to allow a kernel instance in cluster K to access the global variables of another kernel instance in cluster K', by building a physical address as the concatenation of a given LPA with the CXY cluster identifier of the target cluster K'.
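This remote access by concatenation can be illustrated by a minimal sketch, assuming a 64-bit extended pointer with the CXY field in the upper 32 bits and the LPA field in the lower 32 bits. The type and function names below (''xptr_t'', ''make_xptr()'', ...) and the exact field widths are illustrative assumptions, not necessarily the actual ALMOS-MKH definitions.

{{{
#!c
#include <stdint.h>

typedef uint64_t xptr_t;   /* extended (cluster-wide) physical pointer      */
typedef uint32_t cxy_t;    /* cluster identifier (MSB field of the address) */
typedef uint32_t lpa_t;    /* local physical address (LSB field)            */

/* build an extended pointer by concatenation of CXY and LPA */
static inline xptr_t make_xptr( cxy_t cxy , lpa_t lpa )
{
    return ((xptr_t)cxy << 32) | lpa;
}

/* extract the two fields back */
static inline cxy_t xptr_cxy( xptr_t xp ) { return (cxy_t)(xp >> 32); }
static inline lpa_t xptr_lpa( xptr_t xp ) { return (lpa_t)(xp & 0xFFFFFFFFULL); }

/* Because the KDATA segment uses the same LPA in all clusters, the address of
 * a given kernel global variable in a remote cluster K' is simply
 * make_xptr( cxy_of_Kprime , lpa_of_variable ). */
}}}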
ALMOS-MKH defines a software RPC_FIFO, implemented as a global variable in the KDATA segment. This RPC_FIFO is a member of the "cluster_manager" structure, which is replicated in all clusters containing a kernel instance. It is used by client threads to register RPC requests to a given target cluster K.

This RPC_FIFO has a large number (N) of writers, and a small number (M) of readers:
 * N is the number of client threads (practically unbounded). A client thread can execute in any cluster, and can send an RPC request to any target cluster K. To synchronize these multiple client threads, the RPC FIFO implements a "ticket based" policy, defining a ''first arrived / first served'' priority to register a new request into the FIFO.
 * M is the number of server threads in a given cluster (bounded by the CONFIG_RPC_THREAD_MAX parameter). The RPC server threads are dynamically created in cluster K to handle the RPC requests. To synchronize these multiple server threads (called RPC threads), the RPC FIFO implements a "light lock", that is a non-blocking lock: only one RPC thread at a given time can take the lock and become the FIFO owner. This FIFO owner is in charge of handling the pending RPC requests. When the light lock is taken by RPC thread T, another RPC thread T' failing to take the lock simply returns to the IDLE state.

== 3) Client blocking policy and synchronization ==

All RPCs are blocking for the client thread: the client thread registers its request in the target RPC FIFO, blocks on the specific THREAD_BLOCKED_RPC condition, and deschedules. The client thread is unblocked by the RPC server thread when the RPC is completed.

In order to reduce the RPC latency for the client thread, ALMOS-MKH uses IPIs (Inter-Processor Interrupts). An IPI forces the target core to make a scheduling. The client thread selects a core in the target cluster, and sends an IPI to this selected server core. When no RPC thread is active on the target cluster, the selected core activates (or creates) a new RPC thread and executes it. When an RPC thread is already active, the IPI forces a scheduling point on the target core, but no new RPC thread is activated.

To distribute the load associated with RPC handling on all cores in a given target cluster, ALMOS-MKH implements the following policy: the client thread selects, in the target cluster, the core that has the same local index as the core running the client thread in the client cluster. If this is not possible (when the number of cores in the server cluster is smaller than the number of cores in the client cluster), ALMOS-MKH selects core 0.

When the RPC server thread completes, it uses a remote access to unblock the client thread, and sends an IPI to the client core to force a scheduling. This reduces the RPC latency, because the RPC threads are kernel threads that have the highest scheduling priority.

== 4) Parallel RPC requests handling ==

Each cluster contains a pool of RPC threads, in charge of RPC requests handling. As the scheduler takes into account the thread type, it can take into account the RPC_FIFO state for the RPC threads. An RPC thread is not runnable when the RPC_FIFO is empty. When the RPC_FIFO is not empty, only one RPC thread has the FIFO ownership and can consume RPC requests from the FIFO.

...

Therefore, there can exist N simultaneously active RPC threads: the running one is the FIFO owner, and the (N-1) others are blocked on a wait condition.
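The ownership policy described in sections 2) and 4) can be summarized by the following sketch of an RPC thread main loop. All type and function names (''rpc_fifo_t'', ''rpc_fifo_try_take_ownership()'', ...) are hypothetical and only illustrate the take-ownership / consume / deschedule cycle; they are not the actual ALMOS-MKH functions.

{{{
#!c
#include <stdbool.h>

typedef struct rpc_desc_s rpc_desc_t;   /* one registered RPC request             */
typedef struct rpc_fifo_s rpc_fifo_t;   /* per-cluster RPC_FIFO, with light lock  */

/* assumed helpers (prototypes only) */
bool rpc_fifo_try_take_ownership( rpc_fifo_t * fifo );               /* non-blocking light lock */
void rpc_fifo_release_ownership ( rpc_fifo_t * fifo );
bool rpc_fifo_get_request       ( rpc_fifo_t * fifo , rpc_desc_t ** desc );
void rpc_execute                ( rpc_desc_t * desc );               /* run service, unblock client, send IPI */
void thread_block_and_deschedule( void );

void rpc_thread_func( rpc_fifo_t * fifo )
{
    while( true )
    {
        /* only one RPC thread at a time can become the FIFO owner */
        if( rpc_fifo_try_take_ownership( fifo ) )
        {
            rpc_desc_t * desc;

            /* the FIFO owner consumes all pending requests */
            while( rpc_fifo_get_request( fifo , &desc ) ) rpc_execute( desc );

            rpc_fifo_release_ownership( fifo );
        }

        /* FIFO empty, or the light lock is owned by another RPC thread:
         * the thread is not runnable and simply deschedules */
        thread_block_and_deschedule();
    }
}
}}}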
== 5) RPC request format ==

...

 * You must implement the marshaling function ''rpc_kernel_service_client()'', that is executed on the client side by the client thread, in the ''rpc.h'' and ''rpc.c'' files. This function (1) registers the input arguments in the RPC descriptor, (2) registers the RPC request in the target cluster RPC_FIFO, (3) extracts the output arguments from the RPC descriptor.
 * You must implement the marshaling function ''rpc_kernel_service_server()'', that is executed on the server side by the RPC thread, in the ''rpc.h'' and ''rpc.c'' files. This function (1) extracts the input arguments from the RPC descriptor, (2) calls the ''kernel_service()'' function, (3) registers the output arguments in the RPC descriptor.
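The two marshaling functions can be sketched as follows for a hypothetical ''kernel_service()'' taking one input argument and returning an error code. The RPC descriptor layout, the service index, and the ''rpc_send()'' helper are illustrative assumptions, not the actual ALMOS-MKH code.

{{{
#!c
#include <stdint.h>

typedef uint32_t cxy_t;                 /* target cluster identifier */

typedef struct rpc_desc_s
{
    uint32_t index;                     /* RPC service index          */
    uint64_t args[8];                   /* input / output arguments   */
} rpc_desc_t;

/* assumed helpers (prototypes only) */
void     rpc_send( cxy_t cxy , rpc_desc_t * desc );  /* register in the target RPC_FIFO, block, wait completion */
uint32_t kernel_service( uint32_t arg );             /* the kernel service executed in the server cluster       */

/* client side : executed by the client thread */
void rpc_kernel_service_client( cxy_t cxy , uint32_t arg , uint32_t * error )
{
    rpc_desc_t desc;

    desc.index   = 42;                  /* hypothetical service index                               */
    desc.args[0] = arg;                 /* (1) register the input arguments                         */

    rpc_send( cxy , &desc );            /* (2) register the request in the target cluster RPC_FIFO  */

    *error = (uint32_t)desc.args[1];    /* (3) extract the output arguments                         */
}

/* server side : executed by the RPC thread in the target cluster */
void rpc_kernel_service_server( rpc_desc_t * desc )
{
    uint32_t arg   = (uint32_t)desc->args[0];   /* (1) extract the input arguments   */
    uint32_t error = kernel_service( arg );     /* (2) call the kernel service       */
    desc->args[1]  = error;                     /* (3) register the output arguments */
}
}}}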