Changes between Version 15 and Version 16 of kernel_locks
Timestamp: Jul 17, 2015, 11:43:10 AM
kernel_locks
== Atomic access functions ==

=== __unsigned int '''_atomic_increment'''( unsigned int * shared , unsigned int increment )__ ===
This blocking function uses LL/SC to atomically increment a shared variable.
 * '''shared''' : pointer on the shared variable
 * '''increment''' : increment value
It returns the value of the shared variable before the increment.

=== __void '''_atomic_or'''( unsigned int * shared , unsigned int mask )__ ===
This blocking function uses LL/SC to atomically set a bit field in a shared variable.
 * '''shared''' : pointer on the shared variable
 * '''mask''' : *shared <= *shared | mask

=== __void '''_atomic_and'''( unsigned int * shared , unsigned int mask )__ ===
This blocking function uses LL/SC to atomically reset a bit field in a shared variable.
 * '''shared''' : pointer on the shared variable
 * '''mask''' : *shared <= *shared & mask

== Simple lock access functions ==

=== __void '''_simple_lock_acquire'''( simple_lock_t * lock )__ ===
This blocking function does not implement any ordered allocation, and is not distributed.
It returns only when the lock has been granted.

=== __void '''_simple_lock_release'''( simple_lock_t * lock )__ ===
This function releases the lock, and can be used for lock initialisation.
It must always be called after a successful _simple_lock_acquire().

== Queuing lock access functions ==

=== __void '''_spin_lock_init'''( spin_lock_t * lock )__ ===
This function initializes the spin_lock ticket allocator.
=== __void '''_spin_lock_acquire'''( spin_lock_t * lock )__ ===
This blocking function uses the _atomic_increment() function to implement a ticket allocator and provide ordered access to the lock. It returns only when the lock has been granted.

=== __void '''_spin_lock_release'''( spin_lock_t * lock )__ ===
This function releases the lock, but cannot be used for lock initialisation. It must always be called after a successful _spin_lock_acquire().

== Distributed lock access functions ==

=== __void '''_sqt_lock_init'''( sqt_lock_t * lock )__ ===
This function allocates and initialises the SQT nodes, distributed on all clusters. It computes the smallest SQT covering all processors defined in the mapping. This function uses the _remote_malloc() function and the distributed kernel heap segments.

=== __void '''_sqt_lock_acquire'''( sqt_lock_t * lock )__ ===
This blocking function tries to acquire the SQT lock: it tries to get each "partial" lock on the path from bottom to top, using an atomic LL/SC sequence, starting from the bottom. It polls each "partial" lock until it can be taken, and returns only when all "partial" locks, at all levels, have been obtained.
=== __void '''_sqt_lock_release'''( sqt_lock_t * lock )__ ===
This function releases the SQT lock: it resets all "partial" locks on the path from bottom to top, using a normal write, starting from the bottom.