Changeset 624 for trunk/kernel/libk


Timestamp: Mar 12, 2019, 1:37:38 PM (6 years ago)
Author: alain
Message:

Fix several bugs to use the instruction MMU in kernel mode
instead of the instruction address extension register,
and remove the "kentry" segment.

This version is running on the "tsar_generic_iob" platform.

One interesting bug: the cp0_ebase register defining the kernel entry point
(for interrupts, exceptions and syscalls) must be initialized
early in kernel_init(), because the VFS initialisation done by
kernel_init() uses RPCs, and RPCs use Inter-Processor Interrupts.
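
To make this ordering constraint concrete, here is a minimal sketch of the intended
kernel_init() ordering. This is not the actual ALMOS-MKH code: hal_set_ebase(),
kernel_entry and the vfs_init() call shown here are hypothetical names used only to
illustrate the dependency between cp0_ebase and the RPC/IPI mechanism.

    // Sketch only: why cp0_ebase must be set before any RPC is issued.
    void kernel_init( boot_info_t * info )
    {
        // 1) Program cp0_ebase so that interrupts, exceptions and syscalls
        //    enter the kernel at the right address on this core.
        //    (hal_set_ebase() and kernel_entry are hypothetical names.)
        hal_set_ebase( (uint32_t)&kernel_entry );

        // ... other per-core / per-cluster initialisations ...

        // 2) Only then initialise the VFS: this step issues RPCs to remote
        //    clusters, and each RPC is signaled by an Inter-Processor Interrupt,
        //    which jumps through the cp0_ebase entry point set in step 1.
        vfs_init( info );
    }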

Location: trunk/kernel/libk
Files: 4 edited

Legend:

    (unmarked)  Unmodified
    +           Added
    -           Removed
    …           Elided lines
  • trunk/kernel/libk/busylock.c

    r600 r624
      * busylock.c - local kernel-busy waiting lock implementation.
      *
    - * Authors     Alain Greiner (2016,2017,2018)
    + * Authors     Alain Greiner (2016,2017,2018,2019)
      *
      * Copyright (c) UPMC Sorbonne Universites
    …

     #if DEBUG_BUSYLOCK
    -if( (lock->type != LOCK_CHDEV_TXT0) &&
    -    ((uint32_t)hal_get_cycles() > DEBUG_BUSYLOCK) )
    +if( lock->type != LOCK_CHDEV_TXT0 )
     {
    +    // update thread list of busylocks
         xptr_t root_xp = XPTR( local_cxy , &this->busylocks_root );
    -
    -    // update thread list of busylocks
         xlist_add_last( root_xp , XPTR( local_cxy , &lock->xlist ) );
     }
     #endif

    -#if( DEBUG_BUSYLOCK && DEBUG_BUSYLOCK_THREAD_XP )
    +#if( DEBUG_BUSYLOCK & 1 )
     if( (lock->type != LOCK_CHDEV_TXT0) &&
    -    (XPTR( local_cxy , this ) == DEBUG_BUSYLOCK_THREAD_XP) )
    +    (this->process->pid == DEBUG_BUSYLOCK_PID) &&
    +    (this->trdid == DEBUG_BUSYLOCK_TRDID) )
     {
    -    // get cluster and local pointer of target thread
    -    cxy_t      thread_cxy = GET_CXY( DEBUG_BUSYLOCK_THREAD_XP );
    -    thread_t * thread_ptr = GET_PTR( DEBUG_BUSYLOCK_THREAD_XP );
    -
    -    // display message on kernel TXT0
         printk("\n[%s] thread[%x,%x] ACQUIRE lock %s\n",
         __FUNCTION__, this->process->pid, this->trdid, lock_type_str[lock->type] );
    …

     #if DEBUG_BUSYLOCK
    -if( (lock->type != LOCK_CHDEV_TXT0) &&
    -    ((uint32_t)hal_get_cycles() > DEBUG_BUSYLOCK) )
    +if( lock->type != LOCK_CHDEV_TXT0 )
     {
         // remove lock from thread list of busylocks
    …
     #endif

    -#if( DEBUG_BUSYLOCK && DEBUG_BUSYLOCK_THREAD_XP )
    +#if( DEBUG_BUSYLOCK & 1 )
     if( (lock->type != LOCK_CHDEV_TXT0) &&
    -    (XPTR( local_cxy , this ) == DEBUG_BUSYLOCK_THREAD_XP) )
    +    (this->process->pid == DEBUG_BUSYLOCK_PID) &&
    +    (this->trdid == DEBUG_BUSYLOCK_TRDID) )
     {
    -    // get cluster and local pointer of target thread
    -    cxy_t      thread_cxy = GET_CXY( DEBUG_BUSYLOCK_THREAD_XP );
    -    thread_t * thread_ptr = GET_PTR( DEBUG_BUSYLOCK_THREAD_XP );
    -
    -    // display message on kernel TXT0
         printk("\n[%s] thread[%x,%x] RELEASE lock %s\n",
         __FUNCTION__, this->process->pid, this->trdid, lock_type_str[lock->type] );
  • trunk/kernel/libk/busylock.h

    r623 r624
      * busylock.h: local kernel busy-waiting lock definition.
      *
    - * Authors  Alain Greiner (2016,2017,2018)
    + * Authors  Alain Greiner (2016,2017,2018,2019)
      *
      * Copyright (c) UPMC Sorbonne Universites
    …
      * a shared object located in a given cluster, made by thread(s) running in same cluster.
      * It uses a busy waiting policy when the lock is taken by another thread, and should
    - * be used to execute very short actions, such as accessing basic allocators, or higher
    + * be used to execute very short actions, such as accessing basic allocators or higher
      * level synchronisation objects (barriers, queuelocks, or rwlocks).
    - * WARNING: a thread cannot yield when it is owning a busylock.
      *
      * - To acquire the lock, we use a ticket policy to avoid starvation: the calling thread
    …
      *   value  until current == ticket.
      *
    - * - To release the lock, the owner thread increments the "current" value,
    - *   decrements its busylocks counter.
    + * - To release the lock, the owner thread increments the "current" value.
      *
    - * - When a thread takes a busylock, it enters a critical section: the busylock_acquire()
    + * - When a thread takes a busylock, it enters a critical section: the acquire()
      *   function disables the IRQs, takes the lock, increments the thread busylocks counter,
    - *   and save the SR in lock descriptor and returns.
    + *   save the SR in lock descriptor and returns.
      *
    - * - The busylock_release() function releases the lock, decrements the thread busylock
    - *   counter, restores the SR to exit the critical section, and returns
    + * - The release() function releases the lock, decrements the thread busylock
    + *   counter, restores the SR to exit the critical section, and returns.
      *
    - * - If a thread owning a busylock (local or remote) tries to deschedule, the scheduler
    - *   signals a kernel panic.
    + * WARNING: a thread cannot yield when it is holding a busylock (local or remote).
    + *
    + * This rule is checked by all functions containing a thread_yield() AND by the scheduler,
    + * thanks to the busylocks counter stored in the calling thread descriptor.
    + * 1) all functions call "thread_assert_can_yield()" before calling "thread_yield()".
    + * 2) The scheduler checks that the calling thread does not hold any busylock.
    + * In case of violation the core goes to sleep after a [PANIC] message on TXT0.
      ******************************************************************************************/

     /*******************************************************************************************
      * This structure defines a busylock.
    - * The <type> and <xlist> fields are used for debug.
    - * The type defines the lock usage as detailed in the kernel_config.h file.
    + * The <xlist> field is only used when DEBUG_BUSYLOCK is set.
    + * The <type> field defines the lock usage as detailed in the kernel_config.h file.
     ******************************************************************************************/

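The header comments above describe the ticket policy in prose; the sketch below
summarises the same acquire/release logic in simplified C. It is not the actual
busylock.c code: the field names (ticket, current, save_sr), the busylocks counter,
and the hal_disable_irq() / hal_restore_irq() / hal_atomic_add() helpers are
assumptions taken from the description, not verified signatures.

    // Simplified ticket-based busylock (sketch only, assumptions as stated above).
    void busylock_acquire( busylock_t * lock )
    {
        uint32_t   sr;
        thread_t * this = CURRENT_THREAD;

        hal_disable_irq( &sr );                          // enter critical section

        // take a ticket, then poll "current" until it matches (starvation-free)
        uint32_t ticket = hal_atomic_add( &lock->ticket , 1 );
        while( lock->current != ticket );

        this->busylocks++;                               // thread now holds one more busylock
        lock->save_sr = sr;                              // SR restored by busylock_release()
    }

    void busylock_release( busylock_t * lock )
    {
        thread_t * this = CURRENT_THREAD;

        lock->current++;                                 // hand the lock to the next ticket
        this->busylocks--;                               // when it reaches 0 the thread may yield again
        hal_restore_irq( lock->save_sr );                // exit critical section
    }

The busylocks counter manipulated here is the one that thread_assert_can_yield() and
the scheduler inspect to enforce the no-yield rule stated in the header.
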
  • trunk/kernel/libk/remote_busylock.c

    r619 r624
      * remote_busylock.c - remote kernel busy-waiting lock implementation.
      *
    - * Authors     Alain Greiner (2016,2017,2018)
    + * Authors     Alain Greiner (2016,2017,2018,2019)
      *
      * Copyright (c) UPMC Sorbonne Universites
    …
     #if DEBUG_BUSYLOCK
     uint32_t type = hal_remote_l32( XPTR( lock_cxy , &lock_ptr->type ) );
    -if( (type != LOCK_CHDEV_TXT0) &&
    -    ((uint32_t)hal_get_cycles() > DEBUG_BUSYLOCK) )
    +if( type != LOCK_CHDEV_TXT0 )
     {
    +    // update thread list of busyslocks
         xptr_t root_xp = XPTR( local_cxy , &this->busylocks_root );
    -
    -    // update thread list of busyslocks
         xlist_add_last( root_xp , XPTR( lock_cxy  , &lock_ptr->xlist ) );
     }
     #endif

    -#if( DEBUG_BUSYLOCK && DEBUG_BUSYLOCK_THREAD_XP )
    +#if( DEBUG_BUSYLOCK & 1 )
     if( (type != LOCK_CHDEV_TXT0) &&
    -    (XPTR( local_cxy , this ) == DEBUG_BUSYLOCK_THREAD_XP) )
    +    (this->process->pid == DEBUG_BUSYLOCK_PID) &&
    +    (this->trdid == DEBUG_BUSYLOCK_TRDID) )
     {
         printk("\n[%s] thread[%x,%x] ACQUIRE lock %s\n",
    …
     #if DEBUG_BUSYLOCK
     uint32_t type = hal_remote_l32( XPTR( lock_cxy , &lock_ptr->type ) );
    -if( (type != LOCK_CHDEV_TXT0) &&
    -    (XPTR( local_cxy , this ) == DEBUG_BUSYLOCK_THREAD_XP) &&
    -    ((uint32_t)hal_get_cycles() > DEBUG_BUSYLOCK) )
    +if( type != LOCK_CHDEV_TXT0 )
     {
         // remove lock from thread list of busyslocks
    …
     #endif

    -#if (DEBUG_BUSYLOCK && DEBUG_BUSYLOCK_THREAD_XP )
    +#if( DEBUG_BUSYLOCK & 1 )
     if( (type != LOCK_CHDEV_TXT0) &&
    -    (XPTR( local_cxy , this ) == DEBUG_BUSYLOCK_THREAD_XP) )
    +    (this->process->pid == DEBUG_BUSYLOCK_PID) &&
    +    (this->trdid == DEBUG_BUSYLOCK_TRDID) )
     {
         printk("\n[%s] thread[%x,%x] RELEASE lock %s\n",
  • trunk/kernel/libk/remote_busylock.h

    r619 r624
      * remote_busylock.h: remote kernel busy-waiting lock definition.
      *
    - * Authors  Alain Greiner (2016,2017,2018)
    + * Authors  Alain Greiner (2016,2017,2018,2019)
      *
      * Copyright (c) UPMC Sorbonne Universites
    …
      * higher level synchronisation objects, such as remote_queuelock and remote_rwlock.
      *
    - * WARNING: a thread cannot yield when it is owning a busylock (local or remote).
    - *
      * - To acquire the lock, we use a ticket policy to avoid starvation: the calling thread
      *   makes an atomic increment on a "ticket" allocator, and keep polling the "current"
      *   value  until current == ticket.
      *
    - * - To release the lock, the owner thread increments the "current" value,
    - *   decrements its busylocks counter.
    + * - To release the lock, the owner thread increments the "current" value.
      *
    - * - When a thread takes a busylock, it enters a critical section: the busylock_acquire()
    + * - When a thread takes a busylock, it enters a critical section: the acquire()
      *   function disables the IRQs, takes the lock, increments the thread busylocks counter,
    - *    save the SR in the lock descriptor and returns.
    + *   save the SR in the lock descriptor and returns.
      *
    - * - The busylock_release() function decrements the thread busylock counter,
    - *   restores the SR to exit the critical section, and returns
    + * - The release() function releases the lock, decrements the thread busylock
    + *   counter, restores the SR to exit the critical section, and returns.
      *
    - * - If a thread owning a busylock (local or remote) tries to deschedule, the scheduler
    - *   signals a kernel panic.
    + * WARNING: a thread cannot yield when it is holding a busylock (local or remote).
    + *
    + * This rule is checked by all functions containing a thread_yield() AND by the scheduler,
    + * thanks to the busylocks counter stored in the calling thread descriptor.
    + * 1) all functions call "thread_assert_can_yield()" before calling "thread_yield()".
    + * 2) The scheduler checks that the calling thread does not hold any busylock.
    + * In case of violation the core goes to sleep after a [PANIC] message on TXT0.
      ******************************************************************************************/

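The remote variant follows the same ticket protocol as the local busylock, but reaches
the lock through an extended pointer (cluster identifier + local pointer). A minimal
sketch of the acquire path is shown below; only hal_remote_l32() appears in the diff
above, so hal_remote_atomic_add(), hal_remote_s32() and the field names are assumptions
used for illustration, not the actual remote_busylock.c code.

    // Simplified remote ticket acquisition (sketch only, assumptions as stated above).
    void remote_busylock_acquire( xptr_t lock_xp )
    {
        // split the extended pointer into cluster identifier and local pointer
        cxy_t               lock_cxy = GET_CXY( lock_xp );
        remote_busylock_t * lock_ptr = GET_PTR( lock_xp );

        uint32_t sr;
        hal_disable_irq( &sr );                          // enter critical section

        // atomically take a ticket in the remote cluster, then poll "current"
        uint32_t ticket = hal_remote_atomic_add( XPTR( lock_cxy , &lock_ptr->ticket ) , 1 );
        while( hal_remote_l32( XPTR( lock_cxy , &lock_ptr->current ) ) != ticket );

        CURRENT_THREAD->busylocks++;                     // register ownership in the thread descriptor
        hal_remote_s32( XPTR( lock_cxy , &lock_ptr->save_sr ) , sr );  // SR saved in the lock descriptor
    }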