Opened 14 years ago
Last modified 14 years ago
#58 new enhancement
Memory allocator API change
Reported by: Nicolas Pouillon
Owned by: becoulet
Priority: critical
Milestone: Topology handling
Component: mutek
Keywords:
Cc:
Description
Introduction
SOTA
Currently, there is:
- mem_alloc API
- memory_allocator API
- mem_region pools (optional)
mem_alloc only takes a reachability constraint, which is the access scope. The optimization (proximity) scope is not in the API and stays implicit. If regions are activated, a memory_allocator is chosen depending on the scope and the current CPU. A hack was added to obtain a foreign scope context: mem_alloc_cpu, which indirects through a CLS.
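For contrast, a minimal sketch of the current call shape described above; the exact prototypes and type names (mem_scope_e, cpu_id_t) are assumptions, not quotes from the tree:

/* Sketch only, prototypes assumed. */

/* Single scope argument: it is both "who may reach the memory" and,
   implicitly, "where the memory should preferably live". */
void *mem_alloc(size_t size, enum mem_scope_e scope);

/* Hack for a foreign proximity: allocate "close to" another CPU by
   indirecting through that CPU's CLS. */
void *mem_alloc_cpu(size_t size, cpu_id_t cpu);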
Here we see the scope serves two purposes:
- A reachability constraint ("the allocated memory should be accessible from X")
- A proximity (optimization) constraint (implicit, or CLS-based)
Conflating the two in a single parameter is broken.
Proposed evolution
The mem_alloc API should take two parameters:
- A reachability constraint
  - "The allocated memory *may* be accessed from X"
- A proximity scope
  - "The allocated memory should preferably be close to X"
It should also provide better error reporting: returning NULL is not enough.
Some reachability scopes (example calls are sketched after this list):
- SYS: The whole system may access the data
  - Stacks (shm with stacked objects)
  - Schedulers (anyone may push a task into them)
  - System globals
  - CLS
  - User task data
  - (most variables)
- DMA: This is a subset of SYS, indicating that devices must be able to DMA into it
  - Not all memory maps support DMA
  - Some processors reserve zones for DMA (with different cacheability)
- CPU: Zone accessible solely from one CPU (scratchpad-like)
  - Can only hold purely private data
  - No support for migration
  - Probably of no use in the kernel (maybe useful in tasks)
- CLUSTER: Zone accessible solely from a cluster
  - Like a shared scratchpad; this will probably exist in ADAM
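A few hedged call examples using the API proposed under "APIs" below; the proximity handles (cluster_prox, cpu_prox), sizes and type names are made up for illustration:

void *addr;
error_t err;

/* Scheduler run queue: any CPU may push work into it -> SYS reachability. */
err = mem_alloc(sizeof(struct sched_queue_s), &cluster_prox, MEM_VALID_SYS, &addr);

/* Device ring buffer: devices must be able to DMA into it -> DMA reachability. */
err = mem_alloc(ring_size, &cluster_prox, MEM_VALID_DMA, &addr);

/* Purely private per-CPU scratch data -> CPU reachability (scratchpad-like). */
err = mem_alloc(scratch_size, &cpu_prox, MEM_VALID_CPU, &addr);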
Some proximities:
- One per cluster (each time a cost metric changes)
- A global one
  - When allocating truly shared data there is no good placement choice, so we could use a "global" proximity, taking a random memory bank
APIs
Kernel-wise API
enum mem_reachability_s
{
    MEM_VALID_SYS,
    MEM_VALID_DMA,
    MEM_VALID_CLUSTER,
    MEM_VALID_CPU,
};

struct mem_proximity_s
{
    error_t (*alloc)(size_t size,
                     void *priv,
                     enum mem_reachability_s valid,
                     void **addr);
    void *priv;
};

error_t mem_alloc(size_t size,
                  struct mem_proximity_s *prox,
                  enum mem_reachability_s valid,
                  void **addr);

void mem_free(void *addr);
Then we can have:
- One mem_proximity_s per cluster, taking from allocators with a preference scheme
- One global mem_proximity_s, taking one random allocator
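As an illustration, a possible shape for the global proximity's callback, assuming a hypothetical NULL-terminated region table, an assumed region_reaches() helper, and the memory_allocator_pop call proposed below; it iterates instead of picking a random bank, for simplicity:

/* Global proximity: no good placement exists, pick any suitable bank. */
static error_t global_prox_alloc(size_t size, void *priv,
                                 enum mem_reachability_s valid, void **addr)
{
    struct memory_allocator_region_s **regions = priv;  /* hypothetical table */

    for (size_t i = 0; regions[i] != NULL; i++) {
        if (!region_reaches(regions[i], valid))          /* assumed reachability check */
            continue;
        if (memory_allocator_pop(regions[i], size, addr) == 0)
            return 0;
    }
    return -ENOMEM;
}

struct mem_proximity_s global_proximity = {
    .alloc = global_prox_alloc,
    .priv  = all_regions,                                /* hypothetical region table */
};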
On µC and other small designs, mem_alloc may just ignore prox and/or valid arguments.
Memory allocator API
Mostly unchanged; it just needs error reporting.
We should just replace the allocating call with:
/** @this allocates a new memory block in the given region */
error_t memory_allocator_pop(struct memory_allocator_region_s *region,
                             size_t size,
                             void **addr);
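With that in place, mem_alloc itself could reduce to a thin wrapper over the proximity callback. A sketch under the same assumptions; CONFIG_MUTEK_MEM_REGION and default_region are hypothetical names, and the #else branch shows the µC case mentioned above where prox and valid are simply ignored:

error_t mem_alloc(size_t size, struct mem_proximity_s *prox,
                  enum mem_reachability_s valid, void **addr)
{
#if defined(CONFIG_MUTEK_MEM_REGION)       /* hypothetical configuration token */
    return prox->alloc(size, prox->priv, valid, addr);
#else
    (void)prox;
    (void)valid;
    /* small single-bank designs: one default region, no placement decision */
    return memory_allocator_pop(default_region, size, addr);
#endif
}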
Change History (2)
comment:1 Changed 14 years ago by
comment:2 Changed 14 years ago by
Description: modified (diff)
s/validity/reachability/ s/superset/subset/
There was also an idea about factorization of low-level allocator APIs:
This could lead to:
Generic part
Specific APIs
Fine-grained memory allocator
Together with the generic parts, they can add the usual other functions:
Physical page allocator
Upper-level factorization
The proximity API could then be used either for page allocators or for fine-grained allocators.