Architecture Porting Guide¶
An architecture port is needed to enable Zephyr to run on an ISA or an ABI that is not currently supported.
The following are examples of ISAs and ABIs that Zephyr supports:
x86_32 ISA with System V ABI
ARMv7-M ISA with Thumb2 instruction set and ARM Embedded ABI (aeabi)
ARCv2 ISA
For information on Kconfig configuration, see Setting Kconfig configuration values. Architectures use a Kconfig configuration scheme similar to boards.
An architecture port can be divided in several parts; most are required and some are optional:
The early boot sequence: each architecture has different steps it must take when the CPU comes out of reset (required).
Interrupt and exception handling: each architecture handles asynchronous and unrequested events in a specific manner (required).
Thread context switching: the Zephyr context switch is dependent on the ABI and each ISA has a different set of registers to save (required).
Thread creation and termination: A thread's initial stack frame is ABI and architecture-dependent, and thread abortion may be as well (required).
Device drivers: most often, the system clock timer and the interrupt controller are tied to the architecture (some required, some optional).
Utility libraries: some common kernel APIs rely on an architecture-specific implementation for performance reasons (required).
CPU idling/power management: most architectures implement instructions for putting the CPU to sleep (partly optional, most likely very desired).
Fault management: for implementing architecture-specific debug help and handling of fatal errors in threads (partly optional).
Linker scripts and toolchains: architecture-specific details will most likely be needed in the build system and when linking the image (required).
Early Boot Sequence¶
The goal of the early boot sequence is to take the system from the state it is in after reset to a state where it can run C code, and thus the common kernel initialization sequence. Most of the time, very few steps are needed, while some architectures require a bit more work to be performed.
Common steps for all architectures:
Set up an initial stack.
If running an XIP kernel, copy initialized data from ROM to RAM.
If not using an ELF loader, zero the BSS section.
Jump to _Cstart(), the early kernel initialization. _Cstart() is responsible for context switching out of the fake context running at startup into the main thread.
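As a rough illustration only (the linker-provided symbol names below are assumptions, and prep_c() is a hypothetical name for the C portion of the boot path), the common steps often look like:

    #include <string.h>

    /* Linker-provided section boundaries; names are illustrative. */
    extern char __data_rom_start[], __data_ram_start[], __data_ram_end[];
    extern char __bss_start[], __bss_end[];

    extern void _Cstart(void);

    /* Hypothetical prep_c(), entered from the reset vector once an
     * initial stack has been set up.
     */
    void prep_c(void)
    {
    #ifdef CONFIG_XIP
        /* Copy initialized data from its ROM load address to RAM */
        memcpy(__data_ram_start, __data_rom_start,
               __data_ram_end - __data_ram_start);
    #endif
        /* Zero the BSS section */
        memset(__bss_start, 0, __bss_end - __bss_start);

        /* Enter the common kernel initialization; never returns */
        _Cstart();
    }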
Some examples of architecture-specific steps that have to be taken:
If given control in real mode on x86_32, switch to 32-bit protected mode.
Set up the segment registers on x86_32 to handle boot loaders that leave them in an unknown or broken state.
Initialize a board-specific watchdog on Cortex-M3/4.
Switch stacks from MSP to PSP on Cortex-M.
Use a different approach than calling into _Swap() on Cortex-M to prevent race conditions.
Setup FIRQ and regular IRQ handling on ARCv2.
Interrupt and Exception Handling¶
Each architecture defines interrupt and exception handling differently.
When a device wants to signal the processor that there is some work to be done on its behalf, it raises an interrupt. When a thread does an operation that is not handled by the serial flow of the software itself, it raises an exception. Both interrupts and exceptions pass control to a handler. The handler is known as an ISR in the case of interrupts. The handler performs the work required by the exception or the interrupt. For interrupts, that work is device-specific. For exceptions, it depends on the exception, but most often the core kernel itself is responsible for providing the handler.
The kernel has to perform some work in addition to the work the handler itself performs. For example:
Prior to handing control to the handler:
Save the currently executing context.
Possibly exit power saving mode, which includes waking up devices.
Update the kernel uptime if getting out of tickless idle mode.
After getting control back from the handler:
Decide whether to perform a context switch.
When performing a context switch, restore the context being context switched in.
This work is conceptually the same across architectures, but the details are completely different:
The registers to save and restore.
The processor instructions to perform the work.
The numbering of the exceptions.
etc.
It thus needs an architecture-specific implementation, called the interrupt/exception stub.
Another issue is that the kernel defines the signature of ISRs as:
void (*isr)(void *parameter)
Architectures do not have a consistent or native way of handling parameters to an ISR. As such, there are two commonly used methods for handling the parameter:
Using some architecture-defined mechanism, the parameter value is forced in the stub. This is commonly found in x86-based architectures.
The parameters to the ISR are inserted and tracked via a separate table, requiring the architecture to discover at runtime which interrupt is executing. A common interrupt handler demuxer is installed for all entries of the real interrupt vector table, which then fetches the device's ISR and parameter from the separate table. This approach is commonly used in the ARC and ARM architectures via the CONFIG_GEN_ISR_TABLES implementation. You can find examples of the stubs by looking at _interrupt_enter() in x86, _IntExit() in ARM, _isr_wrapper() in ARM, or the full implementation description for ARC in arch/arc/core/isr_wrapper.S.
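A hedged sketch of the second approach follows. The table layout mirrors the general idea of the generated ISR tables, but common_isr_demux() and arch_current_irq() are invented names standing in for the port's own mechanisms:

    /* One {parameter, handler} entry per IRQ line; a single common wrapper
     * installed in every hardware vector consults this table at runtime.
     */
    struct isr_table_entry {
        void *arg;
        void (*isr)(void *arg);
    };

    static struct isr_table_entry sw_isr_table[CONFIG_NUM_IRQS];

    void common_isr_demux(void)
    {
        /* arch_current_irq() stands in for whatever the architecture
         * provides to identify the interrupt line being serviced.
         */
        unsigned int irq = arch_current_irq();
        struct isr_table_entry *entry = &sw_isr_table[irq];

        entry->isr(entry->arg);
    }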
Each architecture also has to implement primitives for interrupt control:
locking interrupts: irq_lock(), irq_unlock().
registering interrupts: IRQ_CONNECT().
programming the priority if possible: irq_priority_set().
enabling/disabling interrupts: irq_enable(), irq_disable().
Note
IRQ_CONNECT is a macro that uses assembler and/or linker script tricks to connect interrupts at build time, saving boot time and text size.
The vector table should contain a handler for each interrupt and exception that can possibly occur. The handler can be as simple as a spinning loop. However, we strongly suggest that handlers at least print some debug information. That information helps in figuring out what went wrong when hitting an exception that is a fault, like divide-by-zero or invalid memory access, or an interrupt that is not expected (spurious interrupt). See the ARM implementation in arch/arm/core/aarch32/cortex_m/fault.c for an example.
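As a minimal sketch (the handler name is illustrative), a catch-all entry that at least identifies itself before halting might look like:

    /* Catch-all handler for unexpected interrupts and unpopulated vector
     * table entries: print something useful, then stop.
     */
    void spurious_irq_handler(void *unused)
    {
        ARG_UNUSED(unused);

        printk("Spurious interrupt or unhandled exception\n");

        for (;;) {
            /* A real port would invoke the kernel's fatal error handling
             * here instead of spinning silently.
             */
        }
    }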
Thread Context Switching¶
Multi-threading is the basic purpose of having a kernel at all. Zephyr supports two types of threads: preemptible and cooperative.
Two crucial concepts when writing an architecture port are the following:
Cooperative threads run at a higher priority than preemptible ones, and always preempt them.
After handling an interrupt, if a cooperative thread was interrupted, the kernel always goes back to running that thread, since it is not preemptible.
A context switch can happen in several circumstances:
When a thread executes a blocking operation, such as taking a semaphore that is currently unavailable.
When a preemptible thread unblocks a thread of higher priority by releasing the object on which it was blocked.
When an interrupt unblocks a thread of higher priority than the one currently executing, if the currently executing thread is preemptible.
When a thread runs to completion.
When a thread causes a fatal exception and is removed from the running threads, for example when referencing invalid memory.
The context switching code must thus be able to handle all these cases.
The kernel keeps the next thread to run in a “cache”, and thus the context switching code only has to fetch from that cache to select which thread to run.
There are two types of context switches: cooperative and preemptive.
A cooperative context switch happens when a thread willfully gives control to another thread. There are two cases where this happens:
When a thread explicitly yields.
When a thread tries to take an object that is currently unavailable and is willing to wait until the object becomes available.
A preemptive context switch happens when either an ISR or a thread causes an operation that schedules a thread of higher priority than the one currently running, if the currently running thread is preemptible. An example of such an operation is releasing an object on which the thread of higher priority was waiting.
Note
Control is never taken from a cooperative thread while it is the running thread.
A cooperative context switch is always done by having a thread call the _Swap() kernel internal symbol. When _Swap() is called, the kernel logic knows that a context switch has to happen: _Swap() does not check to see if a context switch must happen. Rather, _Swap() decides what thread to context switch in. _Swap() is called by the kernel logic when an object being operated on is unavailable, and by some thread yielding/sleeping primitives.
Note
On x86 and Nios2, _Swap() is generic enough and the architecture flexible enough that _Swap() can be called when exiting an interrupt to provoke the context switch. This should not be taken as a rule, since neither the ARM Cortex-M nor ARCv2 ports do this.
Since _Swap() is cooperative, the caller-saved registers from the ABI are already on the stack. There is no need to save them in the k_thread structure.
A context switch can also be performed preemptively. This happens upon exiting an ISR, in the kernel interrupt exit stub:
_interrupt_enter on x86, after the handler is called.
_IntExit on ARM.
_firq_exit and _rirq_exit on ARCv2.
In this case, the context switch must only be invoked when the interrupted thread was preemptible, not when it was a cooperative one, and only when the current interrupt is not nested.
The kernel also has the concept of "locking the scheduler". This is a concept similar to locking the interrupts, but lighter-weight since interrupts can still occur. If a thread has locked the scheduler, it is temporarily non-preemptible.
So, the decision logic to invoke the context switch when exiting an interrupt is simple:
If the interrupted thread is not preemptible, do not invoke it.
Else, fetch the cached thread from the ready queue, and:
If the cached thread is not the current thread, invoke the context switch.
Else, do not invoke it.
This is simple, but crucial: if this is not implemented correctly, the kernel will not function as intended and will experience bizarre crashes, mostly due to stack corruption.
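In C-like pseudocode, with hypothetical helper names standing in for the port's own primitives, the decision reduces to something like:

    /* Sketch of the interrupt-exit decision; is_nested_interrupt(),
     * thread_is_preemptible() and context_switch_in() are illustrative
     * stand-ins for arch/kernel internals.
     */
    void maybe_switch_on_irq_exit(void)
    {
        if (is_nested_interrupt() || !thread_is_preemptible(_current)) {
            return; /* resume the interrupted thread directly */
        }

        struct k_thread *cached = _kernel.ready_q.cache;

        if (cached != _current) {
            context_switch_in(cached); /* arch-specific switch */
        }
    }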
Note
If running a coop-only system, i.e. if CONFIG_NUM_PREEMPT_PRIORITIES
is 0, no preemptive context switch ever happens. The interrupt code can be
optimized to not take any scheduling decision when this is the case.
Thread Creation and Termination¶
To start a new thread, a stack frame must be constructed so that the context
switch can pop it the same way it would pop one from a thread that had been
context switched out. This is to be implemented in an architecture-specific _new_thread internal routine.
The thread entry point is also not to be called directly, i.e. it should not be set as the PC for the new thread. Rather it must be wrapped in _thread_entry. This means that the PC in the stack frame shall be set to _thread_entry, and the thread entry point shall be passed as the first parameter to _thread_entry. The specifics of this depend on the ABI.
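As a hedged sketch (the frame layout, the field names, and the callee_saved.sp field are invented for the example; the real layout is dictated by the ABI and by what the port's context switch code expects to pop, and a real port must also account for the stack object's reserved areas):

    /* Illustrative initial stack frame for a downward-growing stack. */
    struct init_stack_frame {
        k_thread_entry_t entry;  /* becomes _thread_entry's 1st argument */
        void *p1;
        void *p2;
        void *p3;
        void *pc;                /* restored into the program counter */
    };

    extern void _thread_entry(k_thread_entry_t entry,
                              void *p1, void *p2, void *p3);

    void _new_thread(struct k_thread *thread, k_thread_stack_t *stack,
                     size_t stack_size, k_thread_entry_t entry,
                     void *p1, void *p2, void *p3,
                     int prio, unsigned int options)
    {
        /* Carve the frame out of the top of the stack buffer (simplified) */
        struct init_stack_frame *iframe =
            (struct init_stack_frame *)((char *)stack + stack_size) - 1;

        iframe->pc = (void *)_thread_entry;  /* never 'entry' directly */
        iframe->entry = entry;
        iframe->p1 = p1;
        iframe->p2 = p2;
        iframe->p3 = p3;

        thread->callee_saved.sp = (unsigned long)iframe; /* invented field */
    }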
The need for an architecture-specific thread termination implementation depends on the architecture. There is a generic implementation, but it might not work for a given architecture.
One reason that has been encountered for having an architecture-specific implementation of thread termination is that aborting a thread might be different if aborting because of a graceful exit or because of an exception. This is the case for ARM Cortex-M, where the CPU has to be taken out of handler mode if the thread triggered a fatal exception, but not if the thread gracefully exits its entry point function.
This means implementing an architecture-specific version of k_thread_abort(), and setting the Kconfig option CONFIG_ARCH_HAS_THREAD_ABORT as needed for the architecture (e.g. see arch/arm/core/aarch32/cortex_m/Kconfig).
Device Drivers¶
The kernel requires very few hardware devices to function. In theory, the only required device is the interrupt controller, since the kernel can run without a system clock. In practice, to get access to most, if not all, of the sanity check test suite, a system clock is needed as well. Since these two are usually tied to the architecture, they are part of the architecture port.
Interrupt Controllers¶
There can be significant differences between the interrupt controllers and the interrupt concepts across architectures.
For example, x86 has the concept of an IDT and different interrupt controllers. The position of an interrupt in the IDT determines its priority.
On the other hand, the ARM Cortex-M has the NVIC as part of the architecture definition. There is no need for an IDT-like table that is separate from the NVIC vector table. The position in the table has nothing to do with priority of an IRQ: priorities are programmable per-entry.
The ARCv2 has its interrupt unit as part of the architecture definition, which is somewhat similar to the NVIC. However, where ARC defines interrupts as having a one-to-one mapping between exception and interrupt numbers (i.e. exception 1 is IRQ1, and device IRQs start at 16), ARM has IRQ0 being equivalent to exception 16 (and weirdly enough, exception 1 can be seen as IRQ-15).
All these differences mean that very little, if anything, can be shared between architectures with regards to interrupt controllers.
System Clock¶
x86 has APIC timers and the HPET as part of its architecture definition. ARM Cortex-M has the SYSTICK exception. Finally, ARCv2 has the timer0/1 device.
Kernel timeouts are handled in the context of the system clock timer driver’s interrupt handler.
Tickless Idle¶
The kernel has support for tickless idle. Tickless idle is the concept where no system clock timer interrupt is to be delivered to the CPU when the kernel is about to go idle and the closest timeout expiry is past a certain threshold. When this condition happens, the system clock is reprogrammed far in the future instead of for a periodic tick. For this to work, the system clock timer driver must support it.
Tickless idle is optional but strongly recommended to achieve low-power consumption.
The kernel has built-in support for going into tickless idle.
The system clock timer driver must implement some hooks to support tickless idle. See existing drivers for examples.
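As a hedged sketch of the shape such hooks can take (the z_clock_* names and signatures below follow the in-tree timer drivers of this documentation's era; treat them as assumptions and check the existing drivers):

    /* Illustrative tickless-capable hooks a system clock driver provides. */

    void z_clock_set_timeout(s32_t ticks, bool idle)
    {
        /* Reprogram the hardware comparator to fire 'ticks' ticks from
         * now instead of at the next periodic tick; very large values
         * mean "as far in the future as the hardware allows".
         */
    }

    u32_t z_clock_elapsed(void)
    {
        /* Report how many ticks have passed since the last announcement,
         * so the kernel can catch up its uptime when leaving tickless
         * idle (the driver announces ticks via z_clock_announce()).
         */
        return 0;
    }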
The interrupt entry stub (_interrupt_enter, _isr_wrapper) needs to be adapted to handle exiting tickless idle. See examples in the code for existing architectures.
Console Over Serial Line¶
There is one other device that is almost a requirement for an architecture port, since it is so useful for debugging. It is a simple polling, output-only, serial port driver on which to send the console (printk, printf) output.
It is not required, and a RAM console (CONFIG_RAM_CONSOLE) can be used to send all output to a circular buffer that can be read by a debugger instead.
Utility Libraries¶
The kernel depends on a few functions that can be implemented with very few instructions or in a lock-less manner in modern processors. Those are thus expected to be implemented as part of an architecture port.
Atomic operators.
If instructions do not exist for a given architecture, a generic version that wraps irq_lock() and irq_unlock() around non-atomic operations exists. It is configured using the CONFIG_ATOMIC_OPERATIONS_C Kconfig option.
Find-least-significant-bit-set and find-most-significant-bit-set.
If instructions do not exist for a given architecture, it is always possible to implement these functions as generic C functions.
It is possible to use compiler built-ins to implement these, but be careful that they use the required compiler barriers.
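For example, a minimal sketch of a find-most-significant-bit-set helper built on a GCC/Clang built-in (the helper name is invented; the 1-based return convention matches the kernel's find_msb_set()):

    /* Returns the 1-based position of the most significant bit set,
     * or 0 if the operand is 0.
     */
    static inline unsigned int my_find_msb_set(u32_t op)
    {
        if (op == 0) {
            return 0;
        }

        /* __builtin_clz() counts leading zeros; it is undefined for a
         * zero operand, hence the check above.
         */
        return 32 - __builtin_clz(op);
    }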
CPU Idling/Power Management¶
The kernel provides support for CPU power management with two functions: arch_cpu_idle() and arch_cpu_atomic_idle().
arch_cpu_idle() can be as simple as calling the power saving instruction for the architecture with interrupts unlocked, for example hlt on x86, wfi or wfe on ARM, sleep on ARC.
This function can be called in a loop within a context that does not care if it gets interrupted or not by an interrupt before going to sleep. There are basically two scenarios when it is correct to use this function:
In a single-threaded system, in the only thread when the thread is not used for doing real work after initialization, i.e. it is sitting in a loop doing nothing for the duration of the application.
In the idle thread.
arch_cpu_atomic_idle(), on the other hand, must be able to atomically re-enable interrupts and invoke the power saving instruction. It can thus be used in real application code, again in single-threaded systems.
Normally, idling the CPU should be left to the idle thread, but in some very special scenarios, these APIs can be used by applications.
Both functions must exist for a given architecture. However, the implementation can be simply the following steps, if desired:
unlock interrupts
NOP
That said, a real implementation is strongly recommended.
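A hedged sketch of a real pair of implementations follows; the wait/enable primitives are illustrative stand-ins for whatever instruction or intrinsic the target provides:

    void arch_cpu_idle(void)
    {
        /* Interrupts are unlocked here; wait for the next one using the
         * target's power-saving instruction (hlt, wfi, sleep, ...).
         */
        cpu_wait_for_interrupt();   /* illustrative stand-in */
    }

    void arch_cpu_atomic_idle(unsigned int key)
    {
        /* Re-enabling interrupts and entering low power mode must be
         * atomic: no window where an interrupt can slip in beforehand.
         */
        cpu_enable_interrupts_and_wait();   /* illustrative stand-in */

        /* On wakeup, restore the interrupt lockout state from 'key'. */
        arch_irq_unlock(key);
    }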
Fault Management¶
In the event of an unhandled CPU exception, the architecture code must call into z_fatal_error(). This function dumps out architecture-agnostic information and makes a policy decision on what to do next by invoking k_sys_fatal_error(). This function can be overridden to implement application-specific policies that could include locking interrupts and spinning forever (the default implementation) or even powering off the system (if supported).
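For example, a hedged sketch of an architecture-level fault handler funneling into that path (the handler name is invented; K_ERR_CPU_EXCEPTION is the generic reason code for unhandled CPU exceptions):

    /* Called from the low-level exception entry once the hardware-saved
     * register state has been captured into an exception stack frame.
     */
    void arch_fault_handler(z_arch_esf_t *esf)
    {
        /* Ports that can distinguish faults (stack overflow, oops, ...)
         * should pass a more specific reason code.
         */
        z_fatal_error(K_ERR_CPU_EXCEPTION, esf);
    }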
Toolchain and Linking¶
Toolchain support has to be added to the build system.
Some architecture-specific definitions are needed in include/toolchain/gcc.h. See what exists in that file for currently supported architectures.
Each architecture also needs its own linker script, even if most sections can be derived from the linker scripts of other architectures. Some sections might be specific to the new architecture, for example the SCB section on ARM and the IDT section on x86.
Hardware Stack Protection¶
This option uses hardware features to generate a fatal error if a thread in supervisor mode overflows its stack. This is useful for debugging, although for a couple of reasons, you can't reliably make any assertions about the state of the system after this happens:
The kernel could have been inside a critical section when the overflow occurs, leaving important global data structures in a corrupted state.
For systems that implement stack protection using a guard memory region, it’s possible to overshoot the guard and corrupt adjacent data structures before the hardware detects this situation.
To enable the CONFIG_HW_STACK_PROTECTION feature, the system must provide some kind of hardware-based stack overflow protection, and enable the CONFIG_ARCH_HAS_STACK_PROTECTION option.
There are no C APIs that need to be implemented to support stack protection, and it's entirely implemented within the arch/ code. However, in most cases (such as if a guard region needs to be defined) the architecture will need to declare its own versions of the K_THREAD_STACK macros in arch/cpu.h:
_ARCH_THREAD_STACK_DEFINE()
_ARCH_THREAD_STACK_ARRAY_DEFINE()
_ARCH_THREAD_STACK_MEMBER()
_ARCH_THREAD_STACK_SIZEOF()
For systems that implement stack protection using a Memory Protection Unit (MPU) or Memory Management Unit (MMU), this is typically done by declaring a guard memory region immediately before the stack area.
On MMU systems, this guard area is an entire page whose permissions in the page table will generate a fault on writes. This page needs to be configured in the arch’s _new_thread() function.
On MPU systems, one of the MPU regions needs to be reserved for the thread stack guard area, whose size should be minimized. The region in the MPU should be reconfigured on context switch such that the guard region for the incoming thread is not writable.
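As a hedged illustration of the guard-region idea (the guard size, alignment, and macro bodies are invented for the example; the element type follows the in-tree stack macros of the time, and real ports must honor their MPU/MMU alignment rules and the rest of the K_THREAD_STACK contract):

    /* Reserve an illustrative 32-byte guard immediately before the usable
     * stack area; the guard is configured as not-writable in the MPU/MMU.
     */
    #define MPU_GUARD_SIZE 32

    #define _ARCH_THREAD_STACK_DEFINE(sym, size) \
        struct _k_thread_stack_element __aligned(MPU_GUARD_SIZE) \
            sym[MPU_GUARD_SIZE + (size)]

    #define _ARCH_THREAD_STACK_SIZEOF(sym) (sizeof(sym) - MPU_GUARD_SIZE)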
User Mode Threads¶
To support user mode threads, several kernel-to-arch APIs need to be
implemented, and the system must enable the CONFIG_ARCH_HAS_USERSPACE
option. Please see the documentation for each of these functions for more
details:
arch_buffer_validate() to test whether the current thread has access permissions to a particular memory region.
arch_user_mode_enter() which will irreversibly drop a supervisor thread to user mode privileges. The stack must be wiped.
arch_syscall_oops() which generates a kernel oops when system call parameters can't be validated, in such a way that the oops appears to be generated from where the system call was invoked in the user thread.
arch_syscall_invoke0() through arch_syscall_invoke6() to invoke a system call with the appropriate number of arguments, which must all be passed in during the privilege elevation via registers.
arch_is_user_context() to return nonzero if the CPU is currently running in user mode.
arch_mem_domain_max_partitions_get() which indicates the max number of regions for a memory domain. MMU systems have an unlimited amount, MPU systems have constraints on this.
arch_mem_domain_partition_remove() to remove a partition from a memory domain if the currently executing thread was part of that domain.
arch_mem_domain_destroy() to reset the thread's memory domain configuration.
In addition to implementing these APIs, there are some other tasks as well:
_new_thread() needs to spawn threads with K_USER in user mode.
On context switch, the outgoing thread's stack memory should be marked inaccessible to user mode by making the appropriate configuration changes in the memory management hardware. The incoming thread's stack memory should likewise be marked as accessible. This ensures that threads can't mess with other thread stacks.
On context switch, the system needs to switch between memory domains for the incoming and outgoing threads.
Thread stack areas must include a kernel stack region. This should be inaccessible to user threads at all times. This stack will be used when system calls are made. This should be fixed size for all threads, and must be large enough to handle any system call.
A software interrupt or some kind of privilege elevation mechanism needs to be established. This is closely tied to how the _arch_syscall_invoke macros are implemented. On system call, the appropriate handler function needs to be looked up in _k_syscall_table. Bad system call IDs should jump to the K_SYSCALL_BAD handler. Upon completion of the system call, care must be taken not to leak any register state back to user mode.
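As a hedged sketch of the kernel-side dispatch that runs after the privilege elevation (the do_syscall() name and the ssf handling are simplified for the example; the table and the K_SYSCALL_* identifiers follow the generated syscall machinery described above):

    /* Kernel-side dispatch invoked by the privilege elevation stub:
     * bounds-check the untrusted ID, then call the marshalling function
     * from the generated dispatch table on the kernel stack.
     */
    extern const _k_syscall_handler_t _k_syscall_table[K_SYSCALL_LIMIT];

    uintptr_t do_syscall(uintptr_t arg1, uintptr_t arg2, uintptr_t arg3,
                         uintptr_t arg4, uintptr_t arg5, uintptr_t arg6,
                         uintptr_t call_id, void *ssf)
    {
        /* Out-of-range IDs are redirected to the K_SYSCALL_BAD handler
         * rather than being used to index past the table.
         */
        if (call_id >= K_SYSCALL_LIMIT) {
            call_id = K_SYSCALL_BAD;
        }

        return _k_syscall_table[call_id](arg1, arg2, arg3, arg4,
                                         arg5, arg6, ssf);
    }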
API Reference¶
Timing¶
group arch-timing
Threads¶
group arch-threads

Functions
void arch_new_thread(struct k_thread *thread, k_thread_stack_t *pStack, size_t stackSize, k_thread_entry_t entry, void *p1, void *p2, void *p3, int prio, unsigned int options)
Handle arch-specific logic for setting up new threads.
The stack and arch-specific thread state variables must be set up such that a later attempt to switch to this thread will succeed and we will enter z_thread_entry with the requested thread and arguments as its parameters.
At some point in this function’s implementation, z_setup_new_thread() must be called with the true bounds of the available stack buffer within the thread’s stack object.
Parameters
thread: Pointer to uninitialized struct k_thread
pStack: Pointer to the stack space.
stackSize: Stack size in bytes.
entry: Thread entry function.
p1: 1st entry point parameter.
p2: 2nd entry point parameter.
p3: 3rd entry point parameter.
prio: Thread priority.
options: Thread options.
static void arch_switch(void *switch_to, void **switched_from)
Cooperatively context switch.
Architectures have considerable leeway on what the specific semantics of the switch handles are, but optimal implementations should do the following if possible:
1) Push all thread state relevant to the context switch to the current stack.
2) Update the switched_from parameter to contain the current stack pointer, after all context has been saved. switched_from is used as an output-only parameter and its current value is ignored (and can be NULL, see below).
3) Set the stack pointer to the value provided in switch_to.
4) Pop off all thread state from the stack we switched to and return.
Some arches may implement thread->switch_handle as a pointer to the thread itself, and save context somewhere in thread->arch. In this case, on initial context switch from the dummy thread, thread->switch_handle for the outgoing thread is NULL. Instead of dereferencing switched_from all the way to get the thread pointer, subtract ___thread_t_switch_handle_OFFSET to obtain the thread pointer instead. That is, such a scheme would have behavior like (in C pseudocode):
    void arch_switch(void *switch_to, void **switched_from)
    {
        struct k_thread *new = switch_to;
        struct k_thread *old = CONTAINER_OF(switched_from, struct k_thread,
                                            switch_handle);

        /* save old context... */
        *switched_from = old;
        /* restore new context... */
    }
Note that, regardless of the underlying handle representation, the incoming switched_from pointer MUST be written through with a non-NULL value after all relevant thread state has been saved. The kernel uses this as a synchronization signal to be able to wait for switch completion from another CPU.
Parameters
switch_to: Incoming thread's switch handle
switched_from: Pointer to outgoing thread's switch handle storage location, which may be updated.
void arch_switch_to_main_thread(struct k_thread *main_thread, k_thread_stack_t *main_stack, size_t main_stack_size, k_thread_entry_t _main)
Custom logic for entering main thread context at early boot.
Used by architectures where the typical trick of setting up a dummy thread in early boot context to “switch out” of isn’t workable.
Parameters
main_thread: main thread object
main_stack: main thread's stack object
main_stack_size: Size of the stack object's buffer
_main: Entry point for application main function.
int arch_float_disable(struct k_thread *thread)
Disable floating point context preservation.
The function is used to disable the preservation of floating point context information for a particular thread.
Note
For the ARM architecture, disabling floating point preservation may only be requested for the current thread and cannot be requested in ISRs.
Return Value
0: On success.
-EINVAL: If the floating point disabling could not be performed.
Power Management¶
group arch-pm

Functions
FUNC_NORETURN void arch_system_halt(unsigned int reason)
Halt the system, optionally propagating a reason code.
void arch_cpu_idle(void)
Power save idle routine.
This function will be called by the kernel idle loop or possibly within an implementation of z_sys_power_save_idle in the kernel when the ‘_sys_power_save_flag’ variable is non-zero.
Architectures that do not implement power management instructions may immediately return, otherwise a power-saving instruction should be issued to wait for an interrupt.
Note
The function is expected to return after the interrupt that has caused the CPU to exit power-saving mode has been serviced, although this is not a firm requirement.
See: k_cpu_idle()
void arch_cpu_atomic_idle(unsigned int key)
Atomically re-enable interrupts and enter low power mode.
The requirements for arch_cpu_atomic_idle() are as follows:
Enabling interrupts and entering a low-power mode needs to be atomic, i.e. there should be no period of time where interrupts are enabled before the processor enters a low-power mode. See the comments in k_lifo_get(), for example, of the race condition that occurs if this requirement is not met.
After waking up from the low-power mode, the interrupt lockout state must be restored as indicated in the ‘key’ input parameter.
See: k_cpu_atomic_idle()
Parameters
key: Lockout key returned by previous invocation of arch_irq_lock()
Symmetric Multi-Processing¶
group arch-smp

Typedefs
typedef FUNC_NORETURN void (*arch_cpustart_t)(void *data)
Per-cpu entry function.
Parameters
context: parameter, implementation specific
Functions
void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz, arch_cpustart_t fn, void *arg)
Start a numbered CPU on a MP-capable system.
This starts and initializes a specific CPU. The main thread on startup is running on CPU zero, other processors are numbered sequentially. On return from this function, the CPU is known to have begun operating and will enter the provided function. Its interrupts will be initialized but disabled such that irq_unlock() with the provided key will work to enable them.
Normally, in SMP mode this function will be called by the kernel initialization and should not be used as a user API. But it is defined here for special-purpose apps which want Zephyr running on one core and to use others for design-specific processing.
Parameters
cpu_num: Integer number of the CPU
stack: Stack memory for the CPU
sz: Stack buffer size, in bytes
fn: Function to begin running on the CPU.
arg: Untyped argument to be passed to "fn"
static struct _cpu *arch_curr_cpu(void)
Return the CPU struct for the currently executing CPU.
void arch_sched_ipi(void)
Broadcast an interrupt to all CPUs.
This will invoke z_sched_ipi() on other CPUs in the system.
Interrupts¶
group arch-irq

Functions
static bool arch_is_in_isr(void)
Test if the current context is in interrupt context.
XXX: This is inconsistently handled among arches wrt exception context. See: #17656
Return
true if we are in interrupt context

static unsigned int arch_irq_lock(void)
Lock interrupts on the current CPU.
See: irq_lock()

static void arch_irq_unlock(unsigned int key)
Unlock interrupts on the current CPU.
See: irq_unlock()

static bool arch_irq_unlocked(unsigned int key)
Test if calling arch_irq_unlock() with this key would unlock irqs.
Return
true if interrupts were unlocked prior to the arch_irq_lock() call that produced the key argument.
Parameters
key: value returned by arch_irq_lock()
void arch_irq_disable(unsigned int irq)
Disable the specified interrupt line.
See: irq_disable()

void arch_irq_enable(unsigned int irq)
Enable the specified interrupt line.
See: irq_enable()

int arch_irq_is_enabled(unsigned int irq)
Test if an interrupt line is enabled.
See: irq_is_enabled()
int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority, void (*routine)(void *parameter), void *parameter, u32_t flags)
Arch-specific hook to install a dynamic interrupt.
Return
The vector assigned to this interrupt
Parameters
irq: IRQ line number
priority: Interrupt priority
routine: Interrupt service routine
parameter: ISR parameter
flags: Arch-specific IRQ configuration flag
Userspace¶
group arch-userspace

Functions
static uintptr_t arch_syscall_invoke0(uintptr_t call_id)
Invoke a system call with 0 arguments.
No general-purpose register state other than return value may be preserved when transitioning from supervisor mode back down to user mode for security reasons.
It is required that all arguments be stored in registers when elevating privileges from user to supervisor mode.
Processing of the syscall takes place on a separate kernel stack. Interrupts should be enabled when invoking the system call marshallers from the dispatch table. Thread preemption may occur when handling system calls.
Call ids are untrusted and must be bounds-checked, as the value is used to index the system call dispatch table, containing function pointers to the specific system call code.
Return
Return value of the system call. Void system calls return 0 here.
Parameters
call_id: System call ID
static uintptr_t arch_syscall_invoke1(uintptr_t arg1, uintptr_t call_id)
Invoke a system call with 1 argument.
See: arch_syscall_invoke0()
Return
Return value of the system call. Void system calls return 0 here.
Parameters
arg1: First argument to the system call.
call_id: System call ID, will be bounds-checked and used to reference kernel-side dispatch table
static uintptr_t arch_syscall_invoke2(uintptr_t arg1, uintptr_t arg2, uintptr_t call_id)
Invoke a system call with 2 arguments.
See: arch_syscall_invoke0()
Return
Return value of the system call. Void system calls return 0 here.
Parameters
arg1: First argument to the system call.
arg2: Second argument to the system call.
call_id: System call ID, will be bounds-checked and used to reference kernel-side dispatch table
static uintptr_t arch_syscall_invoke3(uintptr_t arg1, uintptr_t arg2, uintptr_t arg3, uintptr_t call_id)
Invoke a system call with 3 arguments.
See: arch_syscall_invoke0()
Return
Return value of the system call. Void system calls return 0 here.
Parameters
arg1: First argument to the system call.
arg2: Second argument to the system call.
arg3: Third argument to the system call.
call_id: System call ID, will be bounds-checked and used to reference kernel-side dispatch table
static uintptr_t arch_syscall_invoke4(uintptr_t arg1, uintptr_t arg2, uintptr_t arg3, uintptr_t arg4, uintptr_t call_id)
Invoke a system call with 4 arguments.
See: arch_syscall_invoke0()
Return
Return value of the system call. Void system calls return 0 here.
Parameters
arg1: First argument to the system call.
arg2: Second argument to the system call.
arg3: Third argument to the system call.
arg4: Fourth argument to the system call.
call_id: System call ID, will be bounds-checked and used to reference kernel-side dispatch table
static uintptr_t arch_syscall_invoke5(uintptr_t arg1, uintptr_t arg2, uintptr_t arg3, uintptr_t arg4, uintptr_t arg5, uintptr_t call_id)
Invoke a system call with 5 arguments.
See: arch_syscall_invoke0()
Return
Return value of the system call. Void system calls return 0 here.
Parameters
arg1: First argument to the system call.
arg2: Second argument to the system call.
arg3: Third argument to the system call.
arg4: Fourth argument to the system call.
arg5: Fifth argument to the system call.
call_id: System call ID, will be bounds-checked and used to reference kernel-side dispatch table
static uintptr_t arch_syscall_invoke6(uintptr_t arg1, uintptr_t arg2, uintptr_t arg3, uintptr_t arg4, uintptr_t arg5, uintptr_t arg6, uintptr_t call_id)
Invoke a system call with 6 arguments.
See: arch_syscall_invoke0()
Return
Return value of the system call. Void system calls return 0 here.
Parameters
arg1: First argument to the system call.
arg2: Second argument to the system call.
arg3: Third argument to the system call.
arg4: Fourth argument to the system call.
arg5: Fifth argument to the system call.
arg6: Sixth argument to the system call.
call_id: System call ID, will be bounds-checked and used to reference kernel-side dispatch table
static bool arch_is_user_context(void)
Indicate whether we are currently running in user mode.
Return
true if the CPU is currently running with user permissions
int arch_mem_domain_max_partitions_get(void)
Get the maximum number of partitions for a memory domain.
Return
Max number of partitions, or -1 if there is no limit
void arch_mem_domain_thread_add(struct k_thread *thread)
Add a thread to a memory domain (arch-specific).
Architecture-specific hook to manage internal data structures or hardware state when the provided thread has been added to a memory domain.
The thread’s memory domain pointer will be set to the domain to be added to.
Parameters
thread: Thread which needs to be configured.
void arch_mem_domain_thread_remove(struct k_thread *thread)
Remove a thread from a memory domain (arch-specific).
Architecture-specific hook to manage internal data structures or hardware state when the provided thread has been removed from a memory domain.
The thread’s memory domain pointer will be the domain that the thread is being removed from.
Parameters
thread: Thread being removed from its memory domain
void arch_mem_domain_partition_remove(struct k_mem_domain *domain, u32_t partition_id)
Remove a partition from the memory domain (arch-specific).
Architecture-specific hook to manage internal data structures or hardware state when a memory domain has had a partition removed.
The partition index data, and the number of partitions configured, are not respectively cleared and decremented in the domain until after this function runs.
Parameters
domain: The memory domain structure
partition_id: The partition index that needs to be deleted
void arch_mem_domain_partition_add(struct k_mem_domain *domain, u32_t partition_id)
Add a partition to the memory domain.
Architecture-specific hook to manage internal data structures or hardware state when a memory domain has a partition added.
Parameters
domain: The memory domain structure
partition_id: The partition that needs to be added
void arch_mem_domain_destroy(struct k_mem_domain *domain)
Remove the memory domain.
Architecture-specific hook to manage internal data structures or hardware state when a memory domain has been destroyed.
Thread assignments to the memory domain are only cleared after this function runs.
Parameters
domain: The memory domain structure which needs to be deleted.
int arch_buffer_validate(void *addr, size_t size, int write)
Check memory region permissions.
Given a memory region, return whether the current memory management hardware configuration would allow a user thread to read/write that region. Used by system calls to validate buffers coming in from userspace.
Notes: The function is guaranteed to never return validation success, if the entire buffer area is not user accessible.
The function is guaranteed to correctly validate the permissions of the supplied buffer, if the user access permissions of the entire buffer are enforced by a single, enabled memory management region.
In some architectures the validation will always return failure if the supplied memory buffer spans multiple enabled memory management regions (even if all such regions permit user access).
Warning
0 size buffer has undefined behavior.
Return
nonzero if the permissions don't match.
Parameters
addr: start address of the buffer
size: the size of the buffer
write: If nonzero, additionally check if the area is writable. Otherwise, just check if the memory can be read.
FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry, void *p1, void *p2, void *p3)
Perform a one-way transition from supervisor to user mode.
Implementations of this function must do the following:
Reset the thread’s stack pointer to a suitable initial value. We do not need any prior context since this is a one-way operation.
Set up any kernel stack region for the CPU to use during privilege elevation
Put the CPU in whatever its equivalent of user mode is
Transfer execution to arch_new_thread() passing along all the supplied arguments, in user mode.
Parameters
user_entry: Entry point to start executing as a user thread
p1: 1st parameter to user thread
p2: 2nd parameter to user thread
p3: 3rd parameter to user thread
FUNC_NORETURN void arch_syscall_oops(void *ssf)
Induce a kernel oops that appears to come from a specific location.
Normally, k_oops() generates an exception that appears to come from the call site of the k_oops() itself.
However, when validating arguments to a system call, if there are problems we want the oops to appear to come from where the system call was invoked and not inside the validation function.
Parameters
ssf: System call stack frame pointer. This gets passed as an argument to _k_syscall_handler_t functions and its contents are completely architecture specific.
size_t arch_user_string_nlen(const char *s, size_t maxsize, int *err)
Safely take the length of a potentially bad string.
This must not fault, instead the err parameter must have -1 written to it. This function otherwise should work exactly like libc strnlen(). On success *err should be set to 0.
Return
Length of the string, not counting NULL byte, up to maxsize
Parameters
s: String to measure
maxsize: Max length of the string
err: Error value to write
Benchmarking¶
group arch-benchmarking

Variables
u64_t arch_timing_swap_start
u64_t arch_timing_swap_end
u64_t arch_timing_irq_start
u64_t arch_timing_irq_end
u64_t arch_timing_tick_start
u64_t arch_timing_tick_end
u64_t arch_timing_user_mode_end
u32_t arch_timing_value_swap_end
u64_t arch_timing_value_swap_common
u64_t arch_timing_value_swap_temp
Miscellaneous Architecture APIs¶
group arch-misc

Functions
static void arch_kernel_init(void)
Architecture-specific kernel initialization hook.
This function is invoked near the top of _Cstart, for additional architecture-specific setup before the rest of the kernel is brought up.
TODO: Deprecate, most arches are using a prep_c() function to do the same thing in a simpler way
static void arch_nop(void)
Do nothing and return. Yawn.