Changeset 632 for trunk/kernel/mm/ppm.h
Timestamp: May 28, 2019, 2:56:04 PM (5 years ago)
Files: 1 edited
trunk/kernel/mm/ppm.h
r625 → r632:

   *
   * The main service provided by the PPM is the dynamic allocation of physical pages
 - * from the "kernel_heap" section. This low-level allocator implements the buddy
 + * from the "kernel_heap" section. This low-level allocator implements the "buddy"
   * algorithm: an allocated block is an integer number n of small pages, where n
 - * is a power of 2, and ln(n) is called order.
 - * This allocator being shared by the local threads, the free_page lists rooted
 - * in the PPM descriptor are protected by a local busylock, because it is used
 - * by the idle_thread during kernel_init().
 - *
 - * Another service is to register the dirty pages in a specific dirty_list, that is
 + * is a power of 2, and ln(n) is called order. The free_pages_root[] array contains
 + * the roots of the local lists of free pages for the different sizes, as required
 + * by the "buddy" algorithm.
 + * Local threads can access these free lists by calling the ppm_alloc_pages() and
 + * ppm_free_pages() functions, and remote threads can access the same free lists
 + * by calling the ppm_remote_alloc_pages() and ppm_remote_free_pages() functions.
 + * Therefore, these free lists are protected by a remote_busylock.
 + *
 + * Another service is to register the dirty pages in a specific local dirty_list,
   * also rooted in the PPM, in order to be able to synchronize all dirty pages on disk.
   * This dirty list is protected by a specific remote_queuelock, because it can be
 - * modified by a remote thread, but it contains only local pages.
 + * modified by a remote thread.
   ****************************************************************************************/

   typedef struct ppm_s
   {
 -     busylock_t        free_lock;                              /*! lock protecting free_pages[] lists */
 +     remote_busylock_t free_lock;                              /*! lock protecting free_pages[] lists */
       list_entry_t      free_pages_root[CONFIG_PPM_MAX_ORDER];  /*! roots of free lists                */
       uint32_t          free_pages_nr[CONFIG_PPM_MAX_ORDER];    /*! free pages number                  */

…

   /*****************************************************************************************
 - * This is the low-level physical pages allocation function.
 - * It allocates N contiguous physical pages. N is a power of 2.
 - * In normal use, it should not be called directly, as the recommended way to get
 - * physical pages is to call the generic allocator defined in kmem.h.
 - *****************************************************************************************
 - * @ order   : ln2( number of 4 Kbytes pages)
 - * @ returns a pointer on the page descriptor if success / NULL otherwise
 + * This local allocator must be called by a thread running in the local cluster.
 + * It allocates n contiguous physical 4 Kbytes pages from the local cluster, where
 + * n is a power of 2 defined by the <order> argument.
 + * In normal use, it should not be called directly, as the recommended way to allocate
 + * physical pages is to call the generic allocator defined in kmem.h.
 + *****************************************************************************************
 + * @ order   : ln2( number of 4 Kbytes pages)
 + * @ returns a local pointer on the page descriptor if success / NULL if error.
   ****************************************************************************************/
   page_t * ppm_alloc_pages( uint32_t order );

   /*****************************************************************************************
 - * This is the low-level physical pages release function. It takes the lock protecting
 - * the free_list before registering the released page in the relevant free_list.
 + * This function must be called by a thread running in the local cluster to release
 + * physical pages. It takes the lock protecting the free_lists before registering the
 + * released page in the relevant free_list.
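The comments above define the allocation unit of the "buddy" algorithm: a block is 2^order contiguous 4 Kbytes pages, and the free_pages_root[] array keeps one free list per order. As a quick illustration (buddy_order() is a hypothetical helper, not part of ppm.h), the smallest order that covers a request of n pages can be computed like this:

```c
#include <stdint.h>

/* Hypothetical helper: smallest order such that (1 << order) >= npages,
 * i.e. the index of the free_pages_root[] list a request of npages
 * would be served from in a buddy allocator. */
static inline uint32_t buddy_order( uint32_t npages )
{
    uint32_t order = 0;
    while ( (1u << order) < npages ) order++;
    return order;
}
```

For example, a request for 3 pages is rounded up to a block of 4 pages (order 2).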
  * In normal use, you do not need to call it directly, as the recommended way to free
   * physical pages is to call the generic allocator defined in kmem.h.
   *****************************************************************************************
 - * @ page    : pointer to the page descriptor to be released
 + * @ page    : local pointer on the page descriptor to be released
   ****************************************************************************************/
   void ppm_free_pages( page_t * page );

…

   * there is no concurrent access issue.
   *****************************************************************************************
 - * @ page    : pointer to the page descriptor to be released
 + * @ page    : local pointer on the page descriptor to be released
   ****************************************************************************************/
   void ppm_free_pages_nolock( page_t * page );

   /*****************************************************************************************
 - * This function checks if a page descriptor pointer is valid.
 - *****************************************************************************************
 - * @ page    : pointer on a page descriptor
 - * @ returns true if valid / false otherwise.
 - ****************************************************************************************/
 - inline bool_t ppm_page_is_valid( page_t * page );
 + * This remote allocator can be called by any thread running in any cluster.
 + * It allocates n contiguous physical 4 Kbytes pages from the cluster identified
 + * by the <cxy> argument, where n is a power of 2 defined by the <order> argument.
 + * In normal use, it should not be called directly, as the recommended way to allocate
 + * physical pages is to call the generic allocator defined in kmem.h.
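The release functions above register a freed block in the free list of its order; a classic buddy allocator then tries to merge the freed block with its "buddy" to rebuild a block of order+1. The buddy of a block of 2^order pages starting at page index idx is found by flipping bit number order of the index. A minimal sketch (buddy_index() is a hypothetical helper, not part of ppm.h):

```c
#include <stdint.h>

/* Hypothetical helper: page index of the buddy of the block starting
 * at index idx and containing 2^order pages. Two buddies differ only
 * in bit <order> of their start index, so freeing can merge a block
 * with its buddy (when both are free) into one block of order+1. */
static inline uint32_t buddy_index( uint32_t idx, uint32_t order )
{
    return idx ^ (1u << order);
}
```

For instance, the order-2 block starting at page 8 has its buddy at page 12, and merging the two yields the order-3 block starting at page 8.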
 + *****************************************************************************************
 + * @ cxy     : remote cluster identifier.
 + * @ order   : ln2( number of 4 Kbytes pages)
 + * @ returns an extended pointer on the page descriptor if success / XPTR_NULL if error.
 + ****************************************************************************************/
 + xptr_t ppm_remote_alloc_pages( cxy_t    cxy,
 +                                uint32_t order );
 +
 + /*****************************************************************************************
 + * This function can be called by any thread running in any cluster to release physical
 + * pages to a remote cluster. It takes the lock protecting the free_lists before
 + * registering the released page in the relevant free_list.
 + * In normal use, you do not need to call it directly, as the recommended way to free
 + * physical pages is to call the generic allocator defined in kmem.h.
 + *****************************************************************************************
 + * @ cxy     : remote cluster identifier.
 + * @ page    : local pointer on the page descriptor to be released in the remote cluster.
 + ****************************************************************************************/
 + void ppm_remote_free_pages( cxy_t    cxy,
 +                             page_t * page );
 +
 + /*****************************************************************************************
 + * This debug function can be called by any thread running in any cluster to display
 + * the current PPM state of a remote cluster.
 + *****************************************************************************************
 + * @ cxy     : remote cluster identifier.
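Both the local and the remote allocation paths draw from the same per-order free lists. When the list for the requested order is empty, a buddy allocator takes a block from the smallest larger non-empty list and splits it down, returning the low half and pushing the freed high halves back on the lower-order lists. A self-contained toy model of that split step (all names are hypothetical; the real ppm_alloc_pages() manipulates page descriptors under the remote_busylock):

```c
#include <stdint.h>

#define ORDER_MAX  4                 /* toy memory: 2^4 = 16 pages total */
#define NPAGES     (1u << ORDER_MAX)

/* freelist[o] is a stack of start indices of free blocks of 2^o pages,
 * playing the role of the free_pages_root[] lists in ppm_t. */
static uint32_t freelist[ORDER_MAX + 1][NPAGES];
static uint32_t freecnt[ORDER_MAX + 1];

static void buddy_init( void )
{
    for ( uint32_t o = 0; o <= ORDER_MAX; o++ ) freecnt[o] = 0;
    freelist[ORDER_MAX][freecnt[ORDER_MAX]++] = 0;   /* one big free block */
}

/* Allocate 2^order pages: find the smallest non-empty list of order
 * >= the requested one, then split the block down, keeping the low
 * half at each step and freeing the high half (its buddy). */
static int buddy_alloc( uint32_t order )
{
    uint32_t o = order;
    while ( o <= ORDER_MAX && freecnt[o] == 0 ) o++;
    if ( o > ORDER_MAX ) return -1;                  /* out of memory */
    uint32_t idx = freelist[o][--freecnt[o]];
    while ( o > order )
    {
        o--;
        freelist[o][freecnt[o]++] = idx + (1u << o); /* free high buddy */
    }
    return (int)idx;
}
```

Starting from one 16-page block, a first order-0 request splits it into blocks of 8, 4, 2 and 1 pages and returns page 0; subsequent requests are then served directly from the lower-order lists.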
 + ****************************************************************************************/
 + void ppm_remote_display( cxy_t cxy );

…

   /*****************************************************************************************
 - * This function prints the PPM allocator status in the calling thread cluster.
 - *****************************************************************************************
 - * string  : character string printed in header
 - ****************************************************************************************/
 - void ppm_display( void );
 -
 - /*****************************************************************************************
 - * This function checks PPM allocator consistency.
 - *****************************************************************************************
 - * @ ppm     : pointer on PPM allocator.
 + * This function can be called by any thread running in any cluster.
 + * It displays the PPM allocator status in the cluster identified by the <cxy> argument.
 + *****************************************************************************************
 + * @ cxy     : remote cluster identifier.
 + ****************************************************************************************/
 + void ppm_remote_display( cxy_t cxy );
 +
 + /*****************************************************************************************
 + * This function must be called by a thread running in the local cluster.
 + * It checks the consistency of the local PPM allocator.
 + *****************************************************************************************
   * @ return 0 if PPM is OK / return -1 if PPM not consistent.
   ****************************************************************************************/
 - error_t ppm_assert_order( ppm_t * ppm );
 + error_t ppm_assert_order( void );