Threads

This section describes kernel services for creating, scheduling, and deleting independently executable threads of instructions.

Lifecycle

A thread is a kernel object that is used for application processing that is too lengthy or too complex to be performed by an ISR.

Concepts

Any number of threads can be defined by an application. Each thread is referenced by a thread id that is assigned when the thread is spawned.

A thread has the following key properties:

  • A stack area, which is a region of memory used for the thread’s stack. The size of the stack area can be tailored to conform to the actual needs of the thread’s processing. Special macros exist to create and work with stack memory regions.
  • A thread control block for private kernel bookkeeping of the thread’s metadata. This is an instance of type struct k_thread.
  • An entry point function, which is invoked when the thread is started. Up to 3 argument values can be passed to this function.
  • A scheduling priority, which instructs the kernel’s scheduler how to allocate CPU time to the thread. (See Scheduling.)
  • A set of thread options, which allow the thread to receive special treatment by the kernel under specific circumstances. (See Thread Options.)
  • A start delay, which specifies how long the kernel should wait before starting the thread.
  • An execution mode, which can either be supervisor or user mode. By default, threads run in supervisor mode and allow access to privileged CPU instructions, the entire memory address space, and peripherals. User mode threads have a reduced set of privileges. This depends on the CONFIG_USERSPACE option. See User Mode.

Thread Creation

A thread must be created before it can be used. The kernel initializes the thread control block as well as one end of the stack portion. The remainder of the thread’s stack is typically left uninitialized.

Specifying a start delay of K_NO_WAIT instructs the kernel to start thread execution immediately. Alternatively, the kernel can be instructed to delay execution of the thread by specifying a timeout value – for example, to allow device hardware used by the thread to become available.

The kernel allows a delayed start to be canceled before the thread begins executing. A cancellation request has no effect if the thread has already started. A thread whose delayed start was successfully canceled must be re-spawned before it can be used.
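
The following minimal sketch illustrates a delayed start. It reuses the stack area, control block, priority, and entry point names introduced in the spawning example under Implementation below; passing K_FOREVER as the delay instead would leave the thread inactive until k_thread_start() is called on it.

/* spawn the thread, but delay the start of its execution by 500 ms */
k_tid_t my_tid = k_thread_create(&my_thread_data, my_stack_area,
                                 K_THREAD_STACK_SIZEOF(my_stack_area),
                                 my_entry_point,
                                 NULL, NULL, NULL,
                                 MY_PRIORITY, 0, 500);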

Thread Termination

Once a thread is started it typically executes forever. However, a thread may synchronously end its execution by returning from its entry point function. This is known as termination.

A thread that terminates is responsible for releasing any shared resources it may own (such as mutexes and dynamically allocated memory) prior to returning, since the kernel does not reclaim them automatically.

Note

The kernel does not currently make any claims regarding an application’s ability to respawn a thread that terminates.

Thread Aborting

A thread may asynchronously end its execution by aborting. The kernel automatically aborts a thread if the thread triggers a fatal error condition, such as dereferencing a null pointer.

A thread can also be aborted by another thread (or by itself) by calling k_thread_abort(). However, it is typically preferable to signal a thread to terminate itself gracefully, rather than aborting it.

As with thread termination, the kernel does not reclaim shared resources owned by an aborted thread.
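
The following minimal sketch shows an explicit abort; my_tid is assumed to be a thread id previously returned by k_thread_create().

/* forcibly end another thread; resources it owns are not reclaimed */
k_thread_abort(my_tid);

/* a thread may also abort itself */
k_thread_abort(k_current_get());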

Note

The kernel does not currently make any claims regarding an application’s ability to respawn a thread that aborts.

Thread Suspension

A thread can be prevented from executing for an indefinite period of time if it becomes suspended. The function k_thread_suspend() can be used to suspend any thread, including the calling thread. Suspending a thread that is already suspended has no additional effect.

Once suspended, a thread cannot be scheduled until another thread calls k_thread_resume() to remove the suspension.
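
For example, assuming my_tid identifies a previously spawned worker thread, another thread can pause and later resume it as sketched below.

/* pause the worker thread indefinitely */
k_thread_suspend(my_tid);

/* ... later, when the worker is needed again ... */
k_thread_resume(my_tid);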

Note

A thread can prevent itself from executing for a specified period of time using k_sleep(). However, this is different from suspending a thread since a sleeping thread becomes executable automatically when the time limit is reached.

Thread Options

The kernel supports a small set of thread options that allow a thread to receive special treatment under specific circumstances. The set of options associated with a thread are specified when the thread is spawned.

A thread that does not require any thread option has an option value of zero. A thread that requires a thread option specifies it by name, using the | character as a separator if multiple options are needed (i.e. combine options using the bitwise OR operator).

The following thread options are supported.

K_ESSENTIAL

This option tags the thread as an essential thread. This instructs the kernel to treat the termination or aborting of the thread as a fatal system error.

By default, the thread is not considered to be an essential thread.

K_FP_REGS and K_SSE_REGS

These x86-specific options indicate that the thread uses the CPU’s floating point registers and SSE registers, respectively. This instructs the kernel to take additional steps to save and restore the contents of these registers when scheduling the thread. (For more information see Floating Point Services.)

By default, the kernel does not attempt to save and restore the contents of these registers when scheduling the thread.

K_USER

If CONFIG_USERSPACE is enabled, this thread will be created in user mode and will have reduced privileges. See User Mode. Otherwise this flag does nothing.

K_INHERIT_PERMS

If CONFIG_USERSPACE is enabled, this thread will inherit all kernel object permissions that the parent thread had, except the parent thread object. See User Mode.
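
The following sketch shows how multiple options are combined when spawning a thread. It assumes CONFIG_USERSPACE is enabled and that the child’s stack area, control block, entry point, and priority have been defined in the same way as in the spawning example under Implementation below.

/* spawn a user mode thread that inherits the parent's object permissions */
k_tid_t child_tid = k_thread_create(&child_thread_data, child_stack_area,
                                    K_THREAD_STACK_SIZEOF(child_stack_area),
                                    child_entry_point,
                                    NULL, NULL, NULL,
                                    CHILD_PRIORITY,
                                    K_USER | K_INHERIT_PERMS,
                                    K_NO_WAIT);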

Implementation

Spawning a Thread

A thread is spawned by defining its stack area and its thread control block, and then calling k_thread_create(). The stack area must be defined using K_THREAD_STACK_DEFINE to ensure it is properly set up in memory.

The thread spawning function returns its thread id, which can be used to reference the thread.

The following code spawns a thread that starts immediately.

#define MY_STACK_SIZE 500
#define MY_PRIORITY 5

extern void my_entry_point(void *, void *, void *);

K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);
struct k_thread my_thread_data;

k_tid_t my_tid = k_thread_create(&my_thread_data, my_stack_area,
                                 K_THREAD_STACK_SIZEOF(my_stack_area),
                                 my_entry_point,
                                 NULL, NULL, NULL,
                                 MY_PRIORITY, 0, K_NO_WAIT);

Alternatively, a thread can be spawned at compile time by calling K_THREAD_DEFINE. Observe that the macro defines the stack area, control block, and thread id variables automatically.

The following code has the same effect as the code segment above.

#define MY_STACK_SIZE 500
#define MY_PRIORITY 5

extern void my_entry_point(void *, void *, void *);

K_THREAD_DEFINE(my_tid, MY_STACK_SIZE,
                my_entry_point, NULL, NULL, NULL,
                MY_PRIORITY, 0, K_NO_WAIT);

User Mode Constraints

This section only applies if CONFIG_USERSPACE is enabled, and a user thread tries to create a new thread. The k_thread_create() API is still used, but there are additional constraints which must be met or the calling thread will be terminated:

  • The calling thread must have permissions granted on both the child thread and stack parameters; both are tracked by the kernel as kernel objects.
  • The child thread and stack objects must be in an uninitialized state, i.e. the thread is not currently running and the stack memory is unused.
  • The stack size parameter passed in must be equal to or less than the bounds of the stack object when it was declared.
  • The K_USER option must be used, as user threads can only create other user threads.
  • The K_ESSENTIAL option must not be used; user threads may not be considered essential threads.
  • The priority of the child thread must be a valid priority value, and equal to or lower in priority than the parent thread.

Dropping Permissions

If CONFIG_USERSPACE is enabled, a thread running in supervisor mode may perform a one-way transition to user mode using the k_thread_user_mode_enter() API. This is a one-way operation which will reset and zero the thread’s stack memory. The thread will be marked as non-essential.
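
The following sketch outlines the pattern: the thread performs its privileged setup in supervisor mode and then enters user mode permanently. Here user_entry_point is a hypothetical thread entry function assumed to be defined elsewhere.

void my_supervisor_entry(void *p1, void *p2, void *p3)
{
    /* privileged setup, e.g. granting kernel object permissions */
    ...

    /* one-way transition: the stack is reset and zeroed, the thread is
     * marked non-essential, and execution continues in user mode at
     * user_entry_point(); this call never returns
     */
    k_thread_user_mode_enter(user_entry_point, NULL, NULL, NULL);
}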

Terminating a Thread

A thread terminates itself by returning from its entry point function.

The following code illustrates the ways a thread can terminate.

void my_entry_point(void *unused1, void *unused2, void *unused3)
{
    while (1) {
        ...
        if (<some condition>) {
            return; /* thread terminates from mid-entry point function */
        }
        ...
    }

    /* thread terminates at end of entry point function */
}

If CONFIG_USERSPACE is enabled, aborting a thread will additionally mark the thread and stack objects as uninitialized so that they may be re-used.

Suggested Uses

Use threads to handle processing that cannot be handled in an ISR.

Use separate threads to handle logically distinct processing operations that can execute in parallel.

Scheduling

The kernel’s priority-based scheduler allows an application’s threads to share the CPU.

Concepts

The scheduler determines which thread is allowed to execute at any point in time; this thread is known as the current thread.

Whenever the scheduler changes the identity of the current thread, or when execution of the current thread is supplanted by an ISR, the kernel first saves the current thread’s CPU register values. These register values get restored when the thread later resumes execution.

Thread States

A thread that has no factors that prevent its execution is deemed to be ready, and is eligible to be selected as the current thread.

A thread that has one or more factors that prevent its execution is deemed to be unready, and cannot be selected as the current thread.

The following factors make a thread unready:

  • The thread has not been started.
  • The thread is waiting for a kernel object to complete an operation. (For example, the thread is taking a semaphore that is unavailable.)
  • The thread is waiting for a timeout to occur.
  • The thread has been suspended.
  • The thread has terminated or aborted.

Thread Priorities

A thread’s priority is an integer value, and can be either negative or non-negative. Numerically lower priorities take precedence over numerically higher values. For example, the scheduler gives thread A of priority 4 precedence over thread B of priority 7; likewise, thread C of priority -2 has higher priority than both thread A and thread B.

The scheduler distinguishes between two classes of threads, based on each thread’s priority.

  • A cooperative thread has a negative priority value. Once it becomes the current thread, a cooperative thread remains the current thread until it performs an action that makes it unready.
  • A preemptible thread has a non-negative priority value. Once it becomes the current thread, a preemptible thread may be supplanted at any time if a cooperative thread, or a preemptible thread of higher or equal priority, becomes ready.

A thread’s initial priority value can be altered up or down after the thread has been started. Thus it is possible for a preemptible thread to become a cooperative thread, and vice versa, by changing its priority.
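
For instance, assuming my_tid identifies a started preemptible thread, the following sketch moves it into the cooperative class by assigning a negative priority.

void make_cooperative(k_tid_t my_tid)
{
    if (k_thread_priority_get(my_tid) >= 0) {
        /* a negative priority places the thread in the cooperative class */
        k_thread_priority_set(my_tid, -1);
    }
}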

The kernel supports a virtually unlimited number of thread priority levels. The configuration options CONFIG_NUM_COOP_PRIORITIES and CONFIG_NUM_PREEMPT_PRIORITIES specify the number of priority levels for each class of thread, resulting in the following usable priority ranges:

  • Cooperative threads: (-CONFIG_NUM_COOP_PRIORITIES) to -1
  • Preemptible threads: 0 to (CONFIG_NUM_PREEMPT_PRIORITIES - 1)

For example, configuring 5 cooperative priorities and 10 preemptive priorities results in the ranges -5 to -1 and 0 to 9, respectively.

Scheduling Algorithm

The kernel’s scheduler selects the highest priority ready thread to be the current thread. When multiple ready threads of the same priority exist, the scheduler chooses the one that has been waiting longest.

Note

Execution of ISRs takes precedence over thread execution, so the execution of the current thread may be supplanted by an ISR at any time unless interrupts have been masked. This applies to both cooperative threads and preemptive threads.

Cooperative Time Slicing

Once a cooperative thread becomes the current thread, it remains the current thread until it performs an action that makes it unready. Consequently, if a cooperative thread performs lengthy computations, it may cause an unacceptable delay in the scheduling of other threads, including those of higher priority and equal priority.

To overcome such problems, a cooperative thread can voluntarily relinquish the CPU from time to time to permit other threads to execute. A thread can relinquish the CPU in two ways:

  • Calling k_yield() puts the thread at the back of the scheduler’s prioritized list of ready threads, and then invokes the scheduler. All ready threads whose priority is higher or equal to that of the yielding thread are then allowed to execute before the yielding thread is rescheduled. If no such ready threads exist, the scheduler immediately reschedules the yielding thread without context switching.
  • Calling k_sleep() makes the thread unready for a specified time period. Ready threads of all priorities are then allowed to execute; however, there is no guarantee that threads whose priority is lower than that of the sleeping thread will actually be scheduled before the sleeping thread becomes ready once again.

Preemptive Time Slicing

Once a preemptive thread becomes the current thread, it remains the current thread until a higher priority thread becomes ready, or until the thread performs an action that makes it unready. Consequently, if a preemptive thread performs lengthy computations, it may cause an unacceptable delay in the scheduling of other threads, including those of equal priority.

To overcome such problems, a preemptive thread can perform cooperative time slicing (as described above), or the scheduler’s time slicing capability can be used to allow other threads of the same priority to execute.

The scheduler divides time into a series of time slices, where slices are measured in system clock ticks. The time slice size is configurable, and can also be changed while the application is running.

At the end of every time slice, the scheduler checks to see if the current thread is preemptible and, if so, implicitly invokes k_yield() on behalf of the thread. This gives other ready threads of the same priority the opportunity to execute before the current thread is scheduled again. If no threads of equal priority are ready, the current thread remains the current thread.

Threads with a priority higher than a specified limit are exempt from preemptive time slicing, and are never preempted by a thread of equal priority. This allows an application to use preemptive time slicing only when dealing with lower priority threads that are less time-sensitive.
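
The following sketch enables 10-millisecond time slices for preemptible threads of priority 5 and numerically higher (i.e. lower priority) levels, and later disables time slicing again.

/* time slice threads whose priority is numerically 5 or greater;
 * threads above this limit are never time sliced
 */
k_sched_time_slice_set(10, 5);

/* disable time slicing entirely */
k_sched_time_slice_set(0, 0);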

Note

The kernel’s time slicing algorithm does not ensure that a set of equal-priority threads receive an equitable amount of CPU time, since it does not measure the amount of time a thread actually gets to execute. For example, a thread may become the current thread just before the end of a time slice and then immediately have to yield the CPU. However, the algorithm does ensure that a thread never executes for longer than a single time slice without being required to yield.

Scheduler Locking

A preemptible thread that does not wish to be preempted while performing a critical operation can instruct the scheduler to temporarily treat it as a cooperative thread by calling k_sched_lock(). This prevents other threads from interfering while the critical operation is being performed.

Once the critical operation is complete the preemptible thread must call k_sched_unlock() to restore its normal, preemptible status.

If a thread calls k_sched_lock() and subsequently performs an action that makes it unready, the scheduler will switch the locking thread out and allow other threads to execute. When the locking thread again becomes the current thread, its non-preemptible status is maintained.
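
The following sketch illustrates the typical pattern for a preemptible thread protecting a critical operation.

void update_shared_state(void)
{
    k_sched_lock();

    /* critical operation: other threads cannot preempt this thread here,
     * although ISRs may still execute
     */
    ...

    k_sched_unlock();
}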

Note

Locking out the scheduler is a more efficient way for a preemptible thread to inhibit preemption than changing its priority level to a negative value.

Meta-IRQ Priorities

When enabled (see CONFIG_NUM_METAIRQ_PRIORITIES), there is a special subclass of cooperative priorities at the highest (numerically lowest) end of the priority space: meta-IRQ threads. These are scheduled according to their normal priority, but also have the special ability to preempt all other threads (and other meta-irq threads) at lower priorities, even if those threads are cooperative and/or have taken a scheduler lock.

This behavior makes the act of unblocking a meta-IRQ thread (by any means, e.g. creating it, calling k_sem_give(), etc.) into the equivalent of a synchronous system call when done by a lower priority thread, or an ARM-like “pended IRQ” when done from true interrupt context. The intent is that this feature will be used to implement interrupt “bottom half” processing and/or “tasklet” features in driver subsystems. The thread, once woken, will be guaranteed to run before the current CPU returns into application code.

Unlike similar features in other OSes, meta-IRQ threads are true threads and run on their own stack (which must be allocated normally), not the per-CPU interrupt stack. Design work to enable the use of the IRQ stack on supported architectures is pending.

Note that because this breaks the promise made to cooperative threads by the Zephyr API (namely that the OS won’t schedule other threads until the current thread deliberately blocks), it should be used only with great care from application code. These are not simply very high priority threads and should not be used as such.

Thread Sleeping

A thread can call k_sleep() to delay its processing for a specified time period. During the time the thread is sleeping the CPU is relinquished to allow other ready threads to execute. Once the specified delay has elapsed the thread becomes ready and is eligible to be scheduled once again.

A sleeping thread can be woken up prematurely by another thread using k_wakeup(). This technique can sometimes be used to permit the secondary thread to signal the sleeping thread that something has occurred without requiring the threads to define a kernel synchronization object, such as a semaphore. Waking up a thread that is not sleeping is allowed, but has no effect.
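
The following sketch shows a consumer thread that sleeps between polls and a producer that wakes it early when data arrives; consumer_tid and the processing steps are placeholders for illustration.

void consumer_entry(void *p1, void *p2, void *p3)
{
    while (1) {
        /* poll every 100 ms unless woken earlier by the producer */
        k_sleep(100);

        /* process whatever data is available */
        ...
    }
}

/* in the producer thread, after making data available */
k_wakeup(consumer_tid);  /* no effect if the consumer is not sleeping */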

Busy Waiting

A thread can call k_busy_wait() to perform a busy wait that delays its processing for a specified time period without relinquishing the CPU to another ready thread.

A busy wait is typically used instead of thread sleeping when the required delay is too short to warrant having the scheduler context switch from the current thread to another thread and then back again.
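
The following minimal sketch waits a few microseconds for a hardware operation to finish; the device-specific steps are elided.

void sample_device(void)
{
    /* start a short hardware operation (device-specific, not shown) */
    ...

    /* the result is ready within a few microseconds, so spinning is
     * cheaper than a pair of context switches
     */
    k_busy_wait(10);

    /* retrieve the completed result (device-specific, not shown) */
    ...
}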

Suggested Uses

Use cooperative threads for device drivers and other performance-critical work.

Use cooperative threads to implement mutual exclusion without the need for a kernel object, such as a mutex.

Use preemptive threads to give priority to time-sensitive processing over less time-sensitive processing.

Custom Data

A thread’s custom data is a 32-bit, thread-specific value that may be used by an application for any purpose.

Concepts

Every thread has a 32-bit custom data area. The custom data is accessible only by the thread itself, and may be used by the application for any purpose it chooses. The default custom data for a thread is zero.

Note

Custom data support is not available to ISRs because they operate within a single shared kernel interrupt handling context.

Implementation

Using Custom Data

By default, thread custom data support is disabled. The configuration option CONFIG_THREAD_CUSTOM_DATA can be used to enable support.

The k_thread_custom_data_set() and k_thread_custom_data_get() functions are used to write and read a thread’s custom data, respectively. A thread can only access its own custom data, and not that of another thread.

The following code uses the custom data feature to record the number of times each thread calls a specific routine.

Note

Obviously, only a single routine can use this technique, since it monopolizes the use of the custom data feature.

int call_tracking_routine(void)
{
    u32_t call_count;

    if (k_is_in_isr()) {
        /* ignore any call made by an ISR */
    } else {
        call_count = (u32_t)k_thread_custom_data_get();
        call_count++;
        k_thread_custom_data_set((void *)call_count);
    }

    /* do rest of routine's processing */
    ...
}

Suggested Uses

Use thread custom data to allow a routine to access thread-specific information, by using the custom data as a pointer to a data structure owned by the thread.

System Threads

A system thread is a thread that the kernel spawns automatically during system initialization.

Concepts

The kernel spawns the following system threads.

Main thread

This thread performs kernel initialization, then calls the application’s main() function (if one is defined).

By default, the main thread uses the highest configured preemptible thread priority (i.e. 0). If the kernel is not configured to support preemptible threads, the main thread uses the lowest configured cooperative thread priority (i.e. -1).

The main thread is an essential thread while it is performing kernel initialization or executing the application’s main() function; this means a fatal system error is raised if the thread aborts. If main() is not defined, or if it executes and then does a normal return, the main thread terminates normally and no error is raised.

Idle thread

This thread executes when there is no other work for the system to do. If possible, the idle thread activates the board’s power management support to save power; otherwise, the idle thread simply performs a “do nothing” loop. The idle thread remains in existence as long as the system is running and never terminates.

The idle thread always uses the lowest configured thread priority. If this makes it a cooperative thread, the idle thread repeatedly yields the CPU to allow the application’s other threads to run when they need to.

The idle thread is an essential thread, which means a fatal system error is raised if the thread aborts.

Additional system threads may also be spawned, depending on the kernel and board configuration options specified by the application. For example, enabling the system workqueue spawns a system thread that services the work items submitted to it. (See Workqueue Threads.)

Implementation

Writing a main() function

An application-supplied main() function begins executing once kernel initialization is complete. The kernel does not pass any arguments to the function.

The following code outlines a trivial main() function. The function used by a real application can be as complex as needed.

void main(void)
{
    /* initialize a semaphore */
    ...

    /* register an ISR that gives the semaphore */
    ...

    /* monitor the semaphore forever */
    while (1) {
        /* wait for the semaphore to be given by the ISR */
        ...
        /* do whatever processing is now needed */
        ...
    }
}

Suggested Uses

Use the main thread to perform thread-based processing in an application that only requires a single thread, rather than defining an additional application-specific thread.

Workqueue Threads

A workqueue is a kernel object that uses a dedicated thread to process work items in a first in, first out manner. Each work item is processed by calling the function specified by the work item. A workqueue is typically used by an ISR or a high-priority thread to offload non-urgent processing to a lower-priority thread so it does not impact time-sensitive processing.

Concepts

Any number of workqueues can be defined. Each workqueue is referenced by its memory address.

A workqueue has the following key properties:

  • A queue of work items that have been added, but not yet processed.
  • A thread that processes the work items in the queue. The priority of the thread is configurable, allowing it to be either cooperative or preemptive as required.

A workqueue must be initialized before it can be used. This sets its queue to empty and spawns the workqueue’s thread.

Work Item Lifecycle

Any number of work items can be defined. Each work item is referenced by its memory address.

A work item has the following key properties:

  • A handler function, which is the function executed by the workqueue’s thread when the work item is processed. This function accepts a single argument, which is the address of the work item itself.
  • A pending flag, which is used by the kernel to signify that the work item is currently a member of a workqueue’s queue.
  • A queue link, which is used by the kernel to link a pending work item to the next pending work item in a workqueue’s queue.

A work item must be initialized before it can be used. This records the work item’s handler function and marks it as not pending.

A work item may be submitted to a workqueue by an ISR or a thread. Submitting a work item appends the work item to the workqueue’s queue. Once the workqueue’s thread has processed all of the preceding work items in its queue the thread will remove a pending work item from its queue and invoke the work item’s handler function. Depending on the scheduling priority of the workqueue’s thread, and the work required by other items in the queue, a pending work item may be processed quickly or it may remain in the queue for an extended period of time.

A handler function can utilize any kernel API available to threads. However, operations that are potentially blocking (e.g. taking a semaphore) must be used with care, since the workqueue cannot process subsequent work items in its queue until the handler function finishes executing.

The single argument that is passed to a handler function can be ignored if it is not required. If the handler function requires additional information about the work it is to perform, the work item can be embedded in a larger data structure. The handler function can then use the argument value to compute the address of the enclosing data structure, and thereby obtain access to the additional information it needs.

A work item is typically initialized once and then submitted to a specific workqueue whenever work needs to be performed. If an ISR or a thread attempts to submit a work item that is already pending, the work item is not affected; the work item remains in its current place in the workqueue’s queue, and the work is only performed once.

A handler function is permitted to re-submit its work item argument to the workqueue, since the work item is no longer pending at that time. This allows the handler to execute work in stages, without unduly delaying the processing of other work items in the workqueue’s queue.

Important

A pending work item must not be altered until the item has been processed by the workqueue thread. This means a work item must not be re-initialized while it is pending. Furthermore, any additional information the work item’s handler function needs to perform its work must not be altered until the handler function has finished executing.

Delayed Work

An ISR or a thread may need to schedule a work item that is to be processed only after a specified period of time, rather than immediately. This can be done by submitting a delayed work item to a workqueue, rather than a standard work item.

A delayed work item is a standard work item that has the following added properties:

  • A delay specifying the time interval to wait before the work item is actually submitted to a workqueue’s queue.
  • A workqueue indicator that identifies the workqueue the work item is to be submitted to.

A delayed work item is initialized and submitted to a workqueue in a similar manner to a standard work item, although different kernel APIs are used. When the submit request is made the kernel initiates a timeout mechanism that is triggered after the specified delay has elapsed. Once the timeout occurs the kernel submits the delayed work item to the specified workqueue, where it remains pending until it is processed in the standard manner.

An ISR or a thread may cancel a delayed work item it has submitted, providing the work item’s timeout is still counting down. The work item’s timeout is aborted and the specified work is not performed.

Attempting to cancel a delayed work item once its timeout has expired has no effect on the work item; the work item remains pending in the workqueue’s queue, unless the work item has already been removed and processed by the workqueue’s thread. Consequently, once a work item’s timeout has expired the work item is always processed by the workqueue and cannot be canceled.

System Workqueue

The kernel defines a workqueue known as the system workqueue, which is available to any application or kernel code that requires workqueue support. The system workqueue is optional, and only exists if the application makes use of it.

Important

Additional workqueues should only be defined when it is not possible to submit new work items to the system workqueue, since each new workqueue incurs a significant cost in memory footprint. A new workqueue can be justified if it is not possible for its work items to co-exist with existing system workqueue work items without an unacceptable impact; for example, if the new work items perform blocking operations that would delay other system workqueue processing to an unacceptable degree.

Implementation

Defining a Workqueue

A workqueue is defined using a variable of type struct k_work_q. The workqueue is initialized by defining the stack area used by its thread and then calling k_work_q_start(). The stack area must be defined using K_THREAD_STACK_DEFINE to ensure it is properly set up in memory.

The following code defines and initializes a workqueue.

#define MY_STACK_SIZE 512
#define MY_PRIORITY 5

K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);

struct k_work_q my_work_q;

k_work_q_start(&my_work_q, my_stack_area,
               K_THREAD_STACK_SIZEOF(my_stack_area), MY_PRIORITY);

Submitting a Work Item

A work item is defined using a variable of type struct k_work. It must then be initialized by calling k_work_init().

An initialized work item can be submitted to the system workqueue by calling k_work_submit(), or to a specified workqueue by calling k_work_submit_to_queue().

The following code demonstrates how an ISR can offload the printing of error messages to the system workqueue. Note that if the ISR attempts to resubmit the work item while it is still pending, the work item is left unchanged and the associated error message will not be printed.

struct device_info {
    struct k_work work;
    char name[16];
} my_device;

void my_isr(void *arg)
{
    ...
    if (error detected) {
        k_work_submit(&my_device.work);
    }
    ...
}

void print_error(struct k_work *item)
{
    struct device_info *the_device =
        CONTAINER_OF(item, struct device_info, work);
    printk("Got error on device %s\n", the_device->name);
}

/* initialize name info for a device */
strcpy(my_device.name, "FOO_dev");

/* initialize work item for printing device's error messages */
k_work_init(&my_device.work, print_error);

/* install my_isr() as interrupt handler for the device (not shown) */
...

Submitting a Delayed Work Item

A delayed work item is defined using a variable of type struct k_delayed_work. It must then be initialized by calling k_delayed_work_init().

An initialized delayed work item can be submitted to the system workqueue by calling k_delayed_work_submit(), or to a specified workqueue by calling k_delayed_work_submit_to_queue(). A delayed work item that has been submitted but not yet consumed by its workqueue can be canceled by calling k_delayed_work_cancel().
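
The following sketch initializes a delayed work item, submits it to the system workqueue with a 100 millisecond delay, and then cancels it while its countdown is still underway; my_delayed_handler is a hypothetical handler function.

struct k_delayed_work my_delayed_work;

/* handler invoked by the system workqueue thread once the delay has
 * elapsed and the item reaches the front of the queue
 */
void my_delayed_handler(struct k_work *item)
{
    ...
}

/* initialize the delayed work item */
k_delayed_work_init(&my_delayed_work, my_delayed_handler);

/* submit to the system workqueue; processing begins no sooner than 100 ms */
k_delayed_work_submit(&my_delayed_work, 100);

/* cancel the submission while its countdown is still underway */
k_delayed_work_cancel(&my_delayed_work);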

Suggested Uses

Use the system workqueue to defer complex interrupt-related processing from an ISR to a cooperative thread. This allows the interrupt-related processing to be done promptly without compromising the system’s ability to respond to subsequent interrupts, and does not require the application to define an additional thread to do the processing.

API Reference

group thread_apis

Defines

K_ESSENTIAL

system thread that must not abort

K_USER

user mode thread

This thread has dropped from supervisor mode to user mode and consequently has additional restrictions

K_INHERIT_PERMS

Inherit Permissions.

Indicates that the thread being created should inherit all kernel object permissions from the thread that created it. No effect if CONFIG_USERSPACE is not enabled.

k_thread_access_grant(thread, ...)

Grant a thread access to a set of kernel objects.

This is a convenience function. For the provided thread, grant access to the remaining arguments, which must be pointers to kernel objects.

The thread object must be initialized (i.e. running). The objects don’t need to be. Note that NULL shouldn’t be passed as an argument.

Parameters
  • thread: Thread to grant access to objects
  • ...: list of kernel object pointers

K_THREAD_DEFINE(name, stack_size, entry, p1, p2, p3, prio, options, delay)

Statically define and initialize a thread.

The thread may be scheduled for immediate execution or a delayed start.

Thread options are architecture-specific, and can include K_ESSENTIAL, K_FP_REGS, and K_SSE_REGS. Multiple options may be specified by separating them using “|” (the bitwise OR operator).

The ID of the thread can be accessed using:

extern const k_tid_t <name>; 

Parameters
  • name: Name of the thread.
  • stack_size: Stack size in bytes.
  • entry: Thread entry function.
  • p1: 1st entry point parameter.
  • p2: 2nd entry point parameter.
  • p3: 3rd entry point parameter.
  • prio: Thread priority.
  • options: Thread options.
  • delay: Scheduling delay (in milliseconds), or K_NO_WAIT (for no delay).

Z_WORK_INITIALIZER(work_handler)
K_WORK_INITIALIZER
K_WORK_DEFINE(work, work_handler)

Initialize a statically-defined work item.

This macro can be used to initialize a statically-defined workqueue work item, prior to its first use. For example,

static K_WORK_DEFINE(<work>, <work_handler>); 

Parameters
  • work: Symbol name for work item object
  • work_handler: Function to invoke each time work item is processed.

Typedefs

typedef void (*k_thread_user_cb_t)(const struct k_thread *thread, void *user_data)
typedef void (*k_work_handler_t)(struct k_work *work)

Work item handler function type.

A work item’s handler function is executed by a workqueue’s thread when the work item is processed by the workqueue.

Return
N/A
Parameters
  • work: Address of the work item.

Functions

void k_thread_foreach(k_thread_user_cb_t user_cb, void *user_data)

Iterate over all the threads in the system.

This routine iterates over all the threads in the system and calls the user_cb function for each thread.

Note
CONFIG_THREAD_MONITOR must be set for this function to be effective. Also this API uses irq_lock to protect the _kernel.threads list which means creation of new threads and terminations of existing threads are blocked until this API returns.
Return
N/A
Parameters
  • user_cb: Pointer to the user callback function.
  • user_data: Pointer to user data.

k_tid_t k_thread_create(struct k_thread *new_thread, k_thread_stack_t *stack, size_t stack_size, k_thread_entry_t entry, void *p1, void *p2, void *p3, int prio, u32_t options, s32_t delay)

Create a thread.

This routine initializes a thread, then schedules it for execution.

The new thread may be scheduled for immediate execution or a delayed start. If the newly spawned thread does not have a delayed start the kernel scheduler may preempt the current thread to allow the new thread to execute.

Thread options are architecture-specific, and can include K_ESSENTIAL, K_FP_REGS, and K_SSE_REGS. Multiple options may be specified by separating them using “|” (the bitwise OR operator).

Historically, users often would use the beginning of the stack memory region to store the struct k_thread data, although corruption will occur if the stack overflows this region and stack protection features may not detect this situation.

Return
ID of new thread.
Parameters
  • new_thread: Pointer to uninitialized struct k_thread
  • stack: Pointer to the stack space.
  • stack_size: Stack size in bytes.
  • entry: Thread entry function.
  • p1: 1st entry point parameter.
  • p2: 2nd entry point parameter.
  • p3: 3rd entry point parameter.
  • prio: Thread priority.
  • options: Thread options.
  • delay: Scheduling delay (in milliseconds), or K_NO_WAIT (for no delay).

FUNC_NORETURN void k_thread_user_mode_enter(k_thread_entry_t entry, void *p1, void *p2, void *p3)

Drop a thread’s privileges permanently to user mode.

Parameters
  • entry: Function to start executing from
  • p1: 1st entry point parameter
  • p2: 2nd entry point parameter
  • p3: 3rd entry point parameter

static void k_thread_resource_pool_assign(struct k_thread *thread, struct k_mem_pool *pool)

Assign a resource memory pool to a thread.

By default, threads have no resource pool assigned unless their parent thread has a resource pool, in which case it is inherited. Multiple threads may be assigned to the same memory pool.

Changing a thread’s resource pool will not migrate allocations from the previous pool.

Parameters
  • thread: Target thread to assign a memory pool for resource requests, or NULL if the thread should no longer have a memory pool.
  • pool: Memory pool to use for resources.

s32_t k_sleep(s32_t duration)

Put the current thread to sleep.

This routine puts the current thread to sleep for duration milliseconds.

Return
Zero if the requested time has elapsed, or the number of milliseconds left to sleep if the thread was woken up by a k_wakeup() call.
Parameters
  • duration: Number of milliseconds to sleep.

void k_busy_wait(u32_t usec_to_wait)

Cause the current thread to busy wait.

This routine causes the current thread to execute a “do nothing” loop for usec_to_wait microseconds.

Return
N/A

void k_yield(void)

Yield the current thread.

This routine causes the current thread to yield execution to another thread of the same or higher priority. If there are no other ready threads of the same or higher priority, the routine returns immediately.

Return
N/A

void k_wakeup(k_tid_t thread)

Wake up a sleeping thread.

This routine prematurely wakes up thread from sleeping.

If thread is not currently sleeping, the routine has no effect.

Return
N/A
Parameters
  • thread: ID of thread to wake.

k_tid_t k_current_get(void)

Get thread ID of the current thread.

Return
ID of current thread.

void k_thread_abort(k_tid_t thread)

Abort a thread.

This routine permanently stops execution of thread. The thread is taken off all kernel queues it is part of (i.e. the ready queue, the timeout queue, or a kernel object wait queue). However, any kernel resources the thread might currently own (such as mutexes or memory blocks) are not released. It is the responsibility of the caller of this routine to ensure all necessary cleanup is performed.

Return
N/A
Parameters
  • thread: ID of thread to abort.

void k_thread_start(k_tid_t thread)

Start an inactive thread.

If a thread was created with K_FOREVER in the delay parameter, it will not be added to the scheduling queue until this function is called on it.

Parameters
  • thread: thread to start

int k_thread_priority_get(k_tid_t thread)

Get a thread’s priority.

This routine gets the priority of thread.

Return
Priority of thread.
Parameters
  • thread: ID of thread whose priority is needed.

void k_thread_priority_set(k_tid_t thread, int prio)

Set a thread’s priority.

This routine immediately changes the priority of thread.

Rescheduling can occur immediately depending on the priority that thread is set to:

  • If its priority is raised above the priority of the caller of this function, and the caller is preemptible, thread will be scheduled in.
  • If the caller operates on itself, lowers its priority below that of other threads in the system, and is preemptible, the thread of highest priority will be scheduled in.

Priority can be assigned in the range of -CONFIG_NUM_COOP_PRIORITIES to CONFIG_NUM_PREEMPT_PRIORITIES-1, where -CONFIG_NUM_COOP_PRIORITIES is the highest priority.

Warning
Changing the priority of a thread currently involved in mutex priority inheritance may result in undefined behavior.
Return
N/A
Parameters
  • thread: ID of thread whose priority is to be set.
  • prio: New priority.

void k_thread_deadline_set(k_tid_t thread, int deadline)

Set deadline expiration time for scheduler.

This sets the “deadline” expiration as a time delta from the current time, in the same units used by k_cycle_get_32(). The scheduler (when deadline scheduling is enabled) will choose the next expiring thread when selecting between threads at the same static priority. Threads at different priorities will be scheduled according to their static priority.

Note
Deadlines that are negative (i.e. in the past) are still seen as higher priority than others, even if the thread has “finished” its work. If you don’t want it scheduled anymore, you have to reset the deadline into the future, block/pend the thread, or modify its priority with k_thread_priority_set().
Note
Despite the API naming, the scheduler makes no guarantees that the thread WILL be scheduled within that deadline, nor does it take extra metadata (like e.g. the “runtime” and “period” parameters in Linux sched_setattr()) that allows the kernel to validate the scheduling for achievability. Such features could be implemented above this call, which is simply input to the priority selection logic.
Parameters
  • thread: A thread on which to set the deadline
  • deadline: A time delta, in cycle units

void k_thread_suspend(k_tid_t thread)

Suspend a thread.

This routine prevents the kernel scheduler from making thread the current thread. All other internal operations on thread are still performed; for example, any timeout it is waiting on keeps ticking, kernel objects it is waiting on are still handed to it, etc.

If thread is already suspended, the routine has no effect.

Return
N/A
Parameters
  • thread: ID of thread to suspend.

void k_thread_resume(k_tid_t thread)

Resume a suspended thread.

This routine allows the kernel scheduler to make thread the current thread, when it is next eligible for that role.

If thread is not currently suspended, the routine has no effect.

Return
N/A
Parameters
  • thread: ID of thread to resume.

void k_sched_time_slice_set(s32_t slice, int prio)

Set time-slicing period and scope.

This routine specifies how the scheduler will perform time slicing of preemptible threads.

To enable time slicing, slice must be non-zero. The scheduler ensures that no thread runs for more than the specified time limit before other threads of that priority are given a chance to execute. Any thread whose priority is higher than prio is exempted, and may execute as long as desired without being preempted due to time slicing.

Time slicing only limits the maximum amount of time a thread may continuously execute. Once the scheduler selects a thread for execution, there is no minimum guaranteed time the thread will execute before threads of greater or equal priority are scheduled.

When the current thread is the only one of that priority eligible for execution, this routine has no effect; the thread is immediately rescheduled after the slice period expires.

To disable timeslicing, set both slice and prio to zero.

Return
N/A
Parameters
  • slice: Maximum time slice length (in milliseconds).
  • prio: Highest thread priority level eligible for time slicing.

void k_sched_lock(void)

Lock the scheduler.

This routine prevents the current thread from being preempted by another thread by instructing the scheduler to treat it as a cooperative thread. If the thread subsequently performs an operation that makes it unready, it will be context switched out in the normal manner. When the thread again becomes the current thread, its non-preemptible status is maintained.

This routine can be called recursively.

Note
k_sched_lock() and k_sched_unlock() should normally be used when the operation being performed can be safely interrupted by ISRs. However, if the amount of processing involved is very small, better performance may be obtained by using irq_lock() and irq_unlock().
Return
N/A

void k_sched_unlock(void)

Unlock the scheduler.

This routine reverses the effect of a previous call to k_sched_lock(). A thread must call the routine once for each time it called k_sched_lock() before the thread becomes preemptible.

Return
N/A

void k_thread_custom_data_set(void *value)

Set current thread’s custom data.

This routine sets the custom data for the current thread to value.

Custom data is not used by the kernel itself, and is freely available for a thread to use as it sees fit. It can be used as a framework upon which to build thread-local storage.

Return
N/A
Parameters
  • value: New custom data value.

void *k_thread_custom_data_get(void)

Get current thread’s custom data.

This routine returns the custom data for the current thread.

Return
Current custom data value.

int k_thread_name_set(k_tid_t thread_id, const char *value)

Set current thread name.

Set the name of the thread to be used when THREAD_MONITOR is enabled for tracing and debugging.

Parameters
  • thread_id: Thread to set name, or NULL to set the current thread
  • value: Name string
Return Value
  • 0: on success
  • -EFAULT: Memory access error with supplied string
  • -ENOSYS: Thread name configuration option not enabled
  • -EINVAL: Thread name too long

const char *k_thread_name_get(k_tid_t thread_id)

Get thread name.

Get the name of a thread

Parameters
  • thread_id: Thread ID
Return Value
  • Thread name, or NULL if configuration not enabled

int k_thread_name_copy(k_tid_t thread_id, char *buf, size_t size)

Copy the thread name into a supplied buffer.

Parameters
  • thread_id: Thread to obtain name information
  • buf: Destination buffer
  • size: Destination buffer size
Return Value
  • -ENOSPC: Destination buffer too small
  • -EFAULT: Memory access error
  • -ENOSYS: Thread name feature not enabled
  • 0: Success

static void k_work_init(struct k_work *work, k_work_handler_t handler)

Initialize a work item.

This routine initializes a workqueue work item, prior to its first use.

Return
N/A
Parameters
  • work: Address of work item.
  • handler: Function to invoke each time work item is processed.

static void k_work_submit_to_queue(struct k_work_q *work_q, struct k_work *work)

Submit a work item.

This routine submits work item work to be processed by workqueue work_q. If the work item is already pending in the workqueue’s queue as a result of an earlier submission, this routine has no effect on the work item. If the work item has already been processed, or is currently being processed, its work is considered complete and the work item can be resubmitted.

Warning
A submitted work item must not be modified until it has been processed by the workqueue.
Note
Can be called by ISRs.
Return
N/A
Parameters
  • work_q: Address of workqueue.
  • work: Address of work item.

static int k_work_submit_to_user_queue(struct k_work_q *work_q, struct k_work *work)

Submit a work item to a user mode workqueue.

Submits a work item to a workqueue that runs in user mode. A temporary memory allocation is made from the caller’s resource pool which is freed once the worker thread consumes the k_work item. The workqueue thread must have memory access to the k_work item being submitted. The caller must have permission granted on the work_q parameter’s queue object.

Otherwise this works the same as k_work_submit_to_queue().

Note
Can be called by ISRs.
Parameters
  • work_q: Address of workqueue.
  • work: Address of work item.
Return Value
  • -EBUSY: if the work item was already in some workqueue
  • -ENOMEM: if no memory for thread resource pool allocation
  • 0: Success

static bool k_work_pending(struct k_work *work)

Check if a work item is pending.

This routine indicates if work item work is pending in a workqueue’s queue.

Note
Can be called by ISRs.
Return
true if work item is pending, or false if it is not pending.
Parameters
  • work: Address of work item.

void k_work_q_start(struct k_work_q *work_q, k_thread_stack_t *stack, size_t stack_size, int prio)

Start a workqueue.

This routine starts workqueue work_q. The workqueue spawns its work processing thread, which runs forever.

Return
N/A
Parameters
  • work_q: Address of workqueue.
  • stack: Pointer to work queue thread’s stack space, as defined by K_THREAD_STACK_DEFINE()
  • stack_size: Size of the work queue thread’s stack (in bytes), which should either be the same constant passed to K_THREAD_STACK_DEFINE() or the value of K_THREAD_STACK_SIZEOF().
  • prio: Priority of the work queue’s thread.

void k_work_q_user_start(struct k_work_q *work_q, k_thread_stack_t *stack, size_t stack_size, int prio)

Start a workqueue in user mode.

This works identically to k_work_q_start() except it is callable from user mode, and the worker thread created will run in user mode. The caller must have permissions granted on both the work_q parameter’s thread and queue objects, and the same restrictions on priority apply as k_thread_create().

Return
N/A
Parameters
  • work_q: Address of workqueue.
  • stack: Pointer to work queue thread’s stack space, as defined by K_THREAD_STACK_DEFINE()
  • stack_size: Size of the work queue thread’s stack (in bytes), which should either be the same constant passed to K_THREAD_STACK_DEFINE() or the value of K_THREAD_STACK_SIZEOF().
  • prio: Priority of the work queue’s thread.

void k_delayed_work_init(struct k_delayed_work *work, k_work_handler_t handler)

Initialize a delayed work item.

This routine initializes a workqueue delayed work item, prior to its first use.

Return
N/A
Parameters
  • work: Address of delayed work item.
  • handler: Function to invoke each time work item is processed.

int k_delayed_work_submit_to_queue(struct k_work_q *work_q, struct k_delayed_work *work, s32_t delay)

Submit a delayed work item.

This routine schedules work item work to be processed by workqueue work_q after a delay of delay milliseconds. The routine initiates an asynchronous countdown for the work item and then returns to the caller. Only when the countdown completes is the work item actually submitted to the workqueue and becomes pending.

Submitting a previously submitted delayed work item that is still counting down cancels the existing submission and restarts the countdown using the new delay. Note that this behavior is inherently subject to race conditions with the pre-existing timeouts and work queue, so care must be taken to synchronize such resubmissions externally.

Warning
A delayed work item must not be modified until it has been processed by the workqueue.
Note
Can be called by ISRs.
Parameters
  • work_q: Address of workqueue.
  • work: Address of delayed work item.
  • delay: Delay before submitting the work item (in milliseconds).
Return Value
  • 0: Work item countdown started.
  • -EINVAL: Work item is being processed or has completed its work.
  • -EADDRINUSE: Work item is pending on a different workqueue.

int k_delayed_work_cancel(struct k_delayed_work *work)

Cancel a delayed work item.

This routine cancels the submission of delayed work item work. A delayed work item can only be canceled while its countdown is still underway.

Note
Can be called by ISRs.
Note
The result of calling this on a k_delayed_work item that has not been submitted (i.e. before the return of the k_delayed_work_submit_to_queue() call) is undefined.
Parameters
  • work: Address of delayed work item.
Return Value
  • 0: Work item countdown canceled.
  • -EINVAL: Work item is being processed or has completed its work.

static void k_work_submit(struct k_work *work)

Submit a work item to the system workqueue.

This routine submits work item work to be processed by the system workqueue. If the work item is already pending in the workqueue’s queue as a result of an earlier submission, this routine has no effect on the work item. If the work item has already been processed, or is currently being processed, its work is considered complete and the work item can be resubmitted.

Warning
Work items submitted to the system workqueue should avoid using handlers that block or yield since this may prevent the system workqueue from processing other work items in a timely manner.
Note
Can be called by ISRs.
Return
N/A
Parameters
  • work: Address of work item.

static int k_delayed_work_submit(struct k_delayed_work *work, s32_t delay)

Submit a delayed work item to the system workqueue.

This routine schedules work item work to be processed by the system workqueue after a delay of delay milliseconds. The routine initiates an asynchronous countdown for the work item and then returns to the caller. Only when the countdown completes is the work item actually submitted to the workqueue and becomes pending.

Submitting a previously submitted delayed work item that is still counting down cancels the existing submission and restarts the countdown using the new delay. If the work item is currently pending on the workqueue’s queue because the countdown has completed it is too late to resubmit the item, and resubmission fails without impacting the work item. If the work item has already been processed, or is currently being processed, its work is considered complete and the work item can be resubmitted.

Warning
Work items submitted to the system workqueue should avoid using handlers that block or yield since this may prevent the system workqueue from processing other work items in a timely manner.
Note
Can be called by ISRs.
Parameters
  • work: Address of delayed work item.
  • delay: Delay before submitting the work item (in milliseconds).
Return Value
  • 0: Work item countdown started.
  • -EINVAL: Work item is being processed or has completed its work.
  • -EADDRINUSE: Work item is pending on a different workqueue.

static s32_t k_delayed_work_remaining_get(struct k_delayed_work *work)

Get time remaining before a delayed work gets scheduled.

This routine computes the (approximate) time remaining before a delayed work gets executed. If the delayed work is not waiting to be scheduled, it returns zero.

Return
Remaining time (in milliseconds).
Parameters
  • work: Delayed work item.

struct k_thread
#include <kernel.h>

Thread Structure