Zephyr API Documentation
3.5.0
A Scalable Open Source RTOS
#include <zephyr/sys/util.h>
#include <zephyr/toolchain.h>
#include <stdint.h>
#include <stddef.h>
#include <inttypes.h>
#include <zephyr/sys/__assert.h>
#include <syscalls/mem_manage.h>
Data Structures

struct k_mem_paging_stats_t
struct k_mem_paging_histogram_t
Macros

#define K_MEM_CACHE_NONE 2
    No caching.
#define K_MEM_CACHE_WT 1
    Write-through caching.
#define K_MEM_CACHE_WB 0
    Full write-back caching.
#define K_MEM_CACHE_MASK (BIT(3) - 1)
    Reserved bits for cache modes in k_map() flags argument.
#define K_MEM_PERM_RW BIT(3)
    Region will have read/write access (and not read-only).
#define K_MEM_PERM_EXEC BIT(4)
    Region will be executable (normally forbidden).
#define K_MEM_PERM_USER BIT(5)
    Region will be accessible to user mode (normally supervisor-only).
#define K_MEM_DIRECT_MAP BIT(6)
    Region's virtual address will be mapped 1:1 to its physical address.
#define K_MEM_MAP_UNINIT BIT(16)
    The mapped region is not guaranteed to be zeroed.
#define K_MEM_MAP_LOCK BIT(17)
    Region will be pinned in memory and never paged.
Functions

size_t k_mem_free_get(void)
    Return the amount of free memory available.
void *k_mem_map(size_t size, uint32_t flags)
    Map anonymous memory into Zephyr's address space.
void k_mem_unmap(void *addr, size_t size)
    Un-map mapped memory.
size_t k_mem_region_align(uintptr_t *aligned_addr, size_t *aligned_size, uintptr_t addr, size_t size, size_t align)
    Given an arbitrary region, provide an aligned region that covers it.
int k_mem_page_out(void *addr, size_t size)
    Evict a page-aligned virtual memory region to the backing store.
void k_mem_page_in(void *addr, size_t size)
    Load a virtual data region into memory.
void k_mem_pin(void *addr, size_t size)
    Pin an aligned virtual data region, paging in as necessary.
void k_mem_unpin(void *addr, size_t size)
    Un-pin an aligned virtual data region.
void k_mem_paging_stats_get(struct k_mem_paging_stats_t *stats)
    Get the paging statistics since system startup.
void k_mem_paging_thread_stats_get(struct k_thread *thread, struct k_mem_paging_stats_t *stats)
    Get the paging statistics since system startup for a thread.
void k_mem_paging_histogram_eviction_get(struct k_mem_paging_histogram_t *hist)
    Get the eviction timing histogram.
void k_mem_paging_histogram_backing_store_page_in_get(struct k_mem_paging_histogram_t *hist)
    Get the backing store page-in timing histogram.
void k_mem_paging_histogram_backing_store_page_out_get(struct k_mem_paging_histogram_t *hist)
    Get the backing store page-out timing histogram.
struct z_page_frame *k_mem_paging_eviction_select(bool *dirty)
    Select a page frame for eviction.
void k_mem_paging_eviction_init(void)
    Initialization function.
int k_mem_paging_backing_store_location_get(struct z_page_frame *pf, uintptr_t *location, bool page_fault)
    Reserve or fetch a storage location for a data page loaded into a page frame.
void k_mem_paging_backing_store_location_free(uintptr_t location)
    Free a backing store location.
void k_mem_paging_backing_store_page_out(uintptr_t location)
    Copy a data page from Z_SCRATCH_PAGE to the specified location.
void k_mem_paging_backing_store_page_in(uintptr_t location)
    Copy a data page from the provided location to Z_SCRATCH_PAGE.
void k_mem_paging_backing_store_page_finalize(struct z_page_frame *pf, uintptr_t location)
    Update internal accounting after a page-in.
void k_mem_paging_backing_store_init(void)
    Backing store initialization function.
#define K_MEM_CACHE_MASK (BIT(3) - 1)

Reserved bits for cache modes in k_map() flags argument.

#define K_MEM_CACHE_NONE 2

No caching.
Most drivers want this.

#define K_MEM_CACHE_WB 0

Full write-back caching.
Any mapped RAM wants this.

#define K_MEM_CACHE_WT 1

Write-through caching.
Used by certain drivers.

#define K_MEM_DIRECT_MAP BIT(6)

Region's virtual address will be mapped 1:1 to its physical address.

#define K_MEM_MAP_LOCK BIT(17)

Region will be pinned in memory and never paged.
Such memory is guaranteed never to produce a page fault due to page-outs or copy-on-write once the mapping call has returned. Physical page frames will be pre-fetched as necessary and pinned.

#define K_MEM_MAP_UNINIT BIT(16)

The mapped region is not guaranteed to be zeroed.
This may improve performance. The associated page frames may contain indeterminate data, zeroes, or even sensitive information.
This may not be used with K_MEM_PERM_USER, as there are no circumstances where this is safe.

#define K_MEM_PERM_EXEC BIT(4)

Region will be executable (normally forbidden).

#define K_MEM_PERM_RW BIT(3)

Region will have read/write access (and not read-only).

#define K_MEM_PERM_USER BIT(5)

Region will be accessible to user mode (normally supervisor-only).
size_t k_mem_free_get(void)
Return the amount of free memory available.
The returned value will reflect how many free RAM page frames are available. If demand paging is enabled, it may still be possible to allocate more.
The information reported by this function may go stale immediately if concurrent memory mappings or page-ins take place.
void *k_mem_map(size_t size, uint32_t flags)

Map anonymous memory into Zephyr's address space.
This function effectively increases the data space available to Zephyr. The kernel will choose a base virtual address and return it to the caller. The memory will have access permissions for all contexts set per the provided flags argument.
If user thread access control needs to be managed in any way, do not enable K_MEM_PERM_USER flags here; instead manage the region's permissions with memory domain APIs after the mapping has been established. Setting K_MEM_PERM_USER here will allow all user threads to access this memory which is usually undesirable.
Unless K_MEM_MAP_UNINIT is used, the returned memory will be zeroed.
The mapped region is not guaranteed to be physically contiguous in memory. Physically contiguous buffers should be allocated statically and pinned at build time.
Pages mapped in this way have write-back cache settings.
The returned virtual memory pointer will be page-aligned. The size parameter, and any base address for re-mapping purposes, must be page-aligned.
Note that the allocation includes two guard pages immediately before and after the requested region. The total size of the allocation will be the requested size plus the size of these two guard pages.
Several K_MEM_MAP_* flags alter the behavior of this function; see the documentation of those flags for details.
Parameters:
    size    Size of the memory mapping. This must be page-aligned.
    flags   K_MEM_PERM_*, K_MEM_MAP_* control flags.
size_t k_mem_region_align(uintptr_t *aligned_addr, size_t *aligned_size, uintptr_t addr, size_t size, size_t align)
Given an arbitrary region, provide an aligned region that covers it.
The returned region will have both its base address and size aligned to the provided alignment value.
Parameters:
    aligned_addr    [out] Aligned address
    aligned_size    [out] Aligned region size
    addr            Region base address
    size            Region size
    align           What to align the address and size to

Returns:
    offset between aligned_addr and addr
void k_mem_unmap(void *addr, size_t size)
Un-map mapped memory.
This removes a memory mapping for the provided page-aligned region. Associated page frames will be freed, and the kernel may re-use the associated virtual address region. Any paged-out data pages may be discarded.
Calling this function on a region which was not mapped to begin with is undefined behavior.
Parameters:
    addr    Page-aligned memory region base virtual address
    size    Page-aligned memory region size