Changes between Version 11 and Version 12 of kernel_locks
Timestamp: Dec 6, 2014, 3:54:13 PM
kernel_locks
1. The '''simple_lock_t''' implements a non-distributed spin-lock without a waiting queue.

2. The '''spin_lock_t''' implements a spin-lock with a waiting queue (based on a ticket allocator scheme), to enforce fairness and avoid live-lock situations. The GIET_LOCK_MAX_TICKET parameter defines the wrapping value for the ticket allocator. This lock must be initialised.

3. The '''sbt_lock_t''' spin-lock can be used when a single lock protects a unique resource shared by a large number of tasks running on a 2D mesh clusterised architecture. The lock is implemented as a Sliced Binary Tree of ''partial'' locks distributed on all clusters, and is intended to avoid contention on a single cluster when all tasks try to access the same resource.

All the lock access functions are prefixed by "_" as a reminder that they can only be executed by a processor in kernel mode.

…

=== void '''_simple_lock_acquire'''( simple_lock_t * lock ) ===
This blocking function does not implement any ordered allocation, and is not distributed.
It returns only when the lock has been granted.
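The simple lock behaviour described above can be sketched as follows. This is only an illustration, using C11 atomics as a portable stand-in for the atomic LL/SC sequence used by the GIET kernel on MIPS; the `value` field name is an assumption, not the actual GIET structure layout.

```c
/* Sketch of a non-ordered, non-distributed test-and-set spin-lock.
 * C11 atomics stand in for the MIPS LL/SC pair; the "value" field
 * is illustrative, not the real GIET layout. */
#include <stdatomic.h>

typedef struct simple_lock_s {
    atomic_uint value;                   /* 0 = free, 1 = taken */
} simple_lock_t;

/* Blocking acquire: no waiting queue, no fairness; the caller simply
 * spins until the compare-and-swap from 0 to 1 succeeds. */
static void _simple_lock_acquire(simple_lock_t *lock)
{
    unsigned int expected;
    do {
        expected = 0;
    } while (!atomic_compare_exchange_weak(&lock->value, &expected, 1));
}

/* Release: a plain store of 0; as noted above, this can also be used
 * to initialise the lock to the "free" state. */
static void _simple_lock_release(simple_lock_t *lock)
{
    atomic_store(&lock->value, 0);
}
```

Because there is no ordering, two processors polling the same simple lock can starve each other indefinitely; this is exactly the situation the ticket-based '''spin_lock_t''' is meant to avoid.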
=== void '''_simple_lock_release'''( simple_lock_t * lock ) ===
This function releases the lock, and can be used for lock initialisation.
It must always be called after a successful _simple_lock_acquire().

== Queuing Lock Access functions ==

…

== Distributed Lock Access functions ==

=== void '''_sbt_lock_init'''( sbt_lock_t* lock ) ===
This function allocates and initialises the Sliced Binary Tree, distributed on all clusters. The X_SIZE and Y_SIZE parameters defined in the ''hard_config.h'' file must be powers of 2, with X_SIZE = Y_SIZE or X_SIZE = 2 * Y_SIZE. This function uses the _remote_malloc() function and the distributed kernel heap segments.

=== void '''_sbt_lock_acquire'''( sbt_lock_t* lock ) ===
This function tries to acquire the SBT lock: it tries to get each "partial" lock on the path from bottom to top, using an atomic LL/SC, starting from the bottom. It is blocking: it polls each "partial" lock until it can be taken, and returns only when all "partial" locks, at all levels, have been obtained.

=== void '''_sbt_lock_release'''( sbt_lock_t* lock ) ===
This function releases the SBT lock: it resets all "partial" locks on the path from bottom to top, using a normal write, starting from the bottom.
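The bottom-to-top traversal of the SBT can be sketched as below. This is a simplified single-path illustration under stated assumptions: C11 atomics replace the MIPS LL/SC pair, a flat pointer array replaces the nodes allocated by _remote_malloc() in the distributed kernel heaps, and the `levels` / `node[]` names are hypothetical. In the real GIET each processor starts from the partial lock attached to its own cluster and climbs towards the root.

```c
/* Sketch of the Sliced Binary Tree lock traversal. Assumptions: C11
 * atomics instead of LL/SC, node[l] points to the "partial" lock this
 * processor must take at level l (level 0 = bottom, last level = root). */
#include <stdatomic.h>

#define SBT_MAX_LEVELS 8

typedef struct sbt_lock_s {
    unsigned int levels;                 /* tree depth on this path */
    atomic_uint  *node[SBT_MAX_LEVELS];  /* partial lock at each level */
} sbt_lock_t;

/* Blocking acquire: poll each partial lock on the path, starting from
 * the bottom, and return only when every level has been obtained. */
static void _sbt_lock_acquire(sbt_lock_t *lock)
{
    for (unsigned int l = 0; l < lock->levels; l++) {
        unsigned int expected;
        do {
            expected = 0;                /* spin until this level is free */
        } while (!atomic_compare_exchange_weak(lock->node[l], &expected, 1));
    }
}

/* Release: reset each partial lock with a normal write, bottom to top. */
static void _sbt_lock_release(sbt_lock_t *lock)
{
    for (unsigned int l = 0; l < lock->levels; l++)
        atomic_store(lock->node[l], 0);
}
```

The point of the sliced layout is that contending processors in distant clusters serialise on nearby partial locks first, so the single root lock is never polled by all tasks at once.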