#58 new enhancement: Memory allocator API change

Opened 14 years ago, last modified 14 years ago

Reported by: Nicolas Pouillon Owned by: becoulet
Priority: critical Milestone: Topology handling
Component: mutek Keywords:
Cc:

Description

Introduction

State of the art (SOTA)

Currently, there is:

  • mem_alloc API
    • mem_region pools (optional)
      • memory_allocator API

mem_alloc only takes a validity constraint, which is the access scope. The optimization scope is not in the API and remains implicit. If regions are activated, a memory_allocator is chosen depending on the scope and the current CPU. A hack, mem_alloc_cpu, was added to get a foreign scope context by making an indirection through a CLS.
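
For illustration, a rough sketch of the current shape; the exact signatures and the mem_scope_e name are approximations inferred from the description above, not verbatim code:

/* Approximate current shape, for illustration only. */
void *mem_alloc(size_t size, enum mem_scope_e scope);
/* hack: same allocation, but proximity derived from a foreign CLS */
void *mem_alloc_cpu(size_t size, void *cls, enum mem_scope_e scope);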

Here we see the scope serves two purposes:

  • A validity constraint ("the allocated memory should be accessible from X")
  • A proximity (optimization) constraint (implicit, or CLS-based)

This is broken

Proposed evolution

mem_alloc API should take two parameters:

  • A validity constraint
    • "The allocated memory *may* be accessed from X"
  • A proximity scope
    • "The allocated memory should preferably be close to X"

It should also report errors properly: returning NULL is not enough.

Some validities:

  • SYS: The whole system may access the data
    • Stacks (shm with stacked objects)
    • schedulers (anyone may push a task in it)
    • system globals
    • CLS
    • user tasks data
    • (most variables)
  • DMA: A superset of SYS, indicating that devices must be able to DMA into the zone
    • Not all memory maps support DMA
    • Some processors reserve zones for DMA (with different cacheability)
  • CPU: Zone accessible solely from a CPU (scratchpad-like)
    • Can only put purely private data in it
    • No support for migration
    • Probably of no use in the kernel (maybe useful in tasks)
  • CLUSTER: Zone accessible solely from a cluster
    • Like a shared scratchpad, this will probably exist in ADAM
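
To make the distinction concrete, a few hypothetical call sites using the API proposed in the APIs section below (prox stands for some proximity object, struct sched_queue_s is an invented example, error handling omitted):

error_t err;
void *q, *ring, *scratch;

/* scheduler run queue: any CPU may push a task into it */
err = mem_alloc(sizeof(struct sched_queue_s), prox, MEM_VALID_SYS, &q);
/* device descriptor ring: must be reachable by DMA */
err = mem_alloc(4096, prox, MEM_VALID_DMA, &ring);
/* purely private scratch data: may land in a CPU scratchpad */
err = mem_alloc(256, prox, MEM_VALID_CPU, &scratch);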

Some proximities:

  • One per cluster (a new one wherever a cost metric changes)
  • A global one
    • when allocating truly shared data, there is no good placement choice, so a "global" proximity can take a random memory bank

APIs

Kernel-wise API

enum mem_validity_s
{
    MEM_VALID_SYS,     /* accessible from the whole system */
    MEM_VALID_DMA,     /* SYS + reachable by device DMA */
    MEM_VALID_CLUSTER, /* accessible from one cluster only */
    MEM_VALID_CPU,     /* accessible from one CPU only */
};

/* A proximity is an allocation policy object: its alloc callback
   picks a suitable allocator, priv carries the policy state. */
struct mem_proximity_s
{
    error_t (*alloc)(
        size_t size,
        void *priv,
        enum mem_validity_s valid,
        void **addr);
    void *priv;
};

error_t mem_alloc(
    size_t size,
    struct mem_proximity_s *prox,
    enum mem_validity_s valid,
    void **addr);

void mem_free(void *addr);
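
A hypothetical call site showing how the two constraints combine with error reporting (cluster_prox and global_prox are assumed proximity objects, and errno-style negative codes are assumed for error_t):

error_t err;
void *buf;

/* buffer filled by a device through DMA, preferably near this cluster */
err = mem_alloc(2048, cluster_prox, MEM_VALID_DMA, &buf);
if (err == -ENOMEM)
    /* no room nearby: retry with the global proximity */
    err = mem_alloc(2048, global_prox, MEM_VALID_DMA, &buf);
if (err)
    return err;

/* ... use buf ... */
mem_free(buf);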

Then we can have:

  • One mem_proximity_s per cluster, taking from allocators with a preference scheme
  • One global mem_proximity_s, taking one random allocator
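
A minimal sketch of both alloc callbacks, matching the mem_proximity_s signature above. struct cluster_s, struct region_table_s, region_satisfies(), prng_next() and the errno-style -ENOMEM code are assumptions, not existing code; memory_allocator_pop is the call proposed in the next section:

/* Per-cluster proximity: walk the regions from nearest to farthest.
   The distance ordering is assumed to be built when the topology
   is enumerated. */
static error_t cluster_prox_alloc(size_t size, void *priv,
                                  enum mem_validity_s valid, void **addr)
{
    struct cluster_s *cl = priv;

    for (size_t i = 0; i < cl->region_count; i++) {
        struct memory_allocator_region_s *r = cl->regions_by_distance[i];

        /* skip regions that cannot satisfy the validity constraint */
        if (!region_satisfies(r, valid))
            continue;
        if (memory_allocator_pop(r, size, addr) == 0)
            return 0;
    }
    return -ENOMEM;
}

/* Global proximity: no placement is better than another, so start
   from a random region to spread truly shared data across banks. */
static error_t global_prox_alloc(size_t size, void *priv,
                                 enum mem_validity_s valid, void **addr)
{
    struct region_table_s *tbl = priv;
    size_t start = prng_next() % tbl->count;

    for (size_t i = 0; i < tbl->count; i++) {
        struct memory_allocator_region_s *r =
            tbl->regions[(start + i) % tbl->count];

        if (region_satisfies(r, valid) &&
            memory_allocator_pop(r, size, addr) == 0)
            return 0;
    }
    return -ENOMEM;
}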

On µCs and other small designs, mem_alloc may simply ignore the prox and/or valid arguments.
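
A minimal sketch of that degenerate case, assuming a single default_region backs the whole address space:

/* Small-design fallback: one region serves every request,
   the prox and valid hints are accepted but ignored. */
error_t mem_alloc(size_t size, struct mem_proximity_s *prox,
                  enum mem_validity_s valid, void **addr)
{
    (void)prox;
    (void)valid;
    return memory_allocator_pop(default_region, size, addr);
}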

Memory allocator API

Mostly unchanged; it just needs error reporting.

We should just replace the allocating call with:

/** @this allocates a new memory block in the given region */
error_t memory_allocator_pop(
    struct memory_allocator_region_s *region,
    size_t size,
    void **addr);
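
A caller can then propagate the precise failure cause instead of testing a bare NULL; a hypothetical use:

void *addr;
error_t err = memory_allocator_pop(region, size, &addr);
if (err)
    return err;  /* e.g. -ENOMEM, instead of an uninformative NULL */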
