Basics-Wrapup


RTOS - Real-Time Operating System

Why use an RTOS?

But we also need to consider some points before using an RTOS.

Busy-Wait Vs Thread Scheduling


Parallel vs Concurrent

Foreground Vs Background


Execution Thread

Thread Control Block

TCB Diagram

Thread States

Create Thread API


Thread Lists

Arm CM4 OS Support Features

PendSV for Context Switching


PendSV to partition an interrupt


By now we have an overview of our OS and how it will work, so let’s get into some code.

Architectural Steps Toward OS Creation

1. Thread Creation and TCB

2. Scheduling and Mutual Exclusion

3. Mutex, Semaphore and Mailbox

4. Timer Support

5. Other Features and Events

Now let’s start with our STM32F1 kit and build and debug C code alongside assembly code. For some architectural organization, we will create a new folder inside our advanced project, next to the inc and src folders inside the core folder. Inside this new folder we will create inc and src folders for our RTOS, and the assembly files will live alongside the .c files.

Thread Creation and TCB

Let’s start with the structures we will use for each event list and thread.

Main Structures

  typedef struct thread_t
  {
    uint32_t pStackPointer;        /**< Stack pointer */
    uint32_t priority;             /**< Thread priority */
    RTOS_listItem_t item;          /**< List item of this thread */
  } RTOS_thread_t;
  1. Stack pointer, which points to the thread’s saved context (a 16-word frame, the TCB) at the top of this thread’s stack

  2. Thread priority

  3. Thread list item (note: an item, not an item pointer). It links the thread into a list, holds a pointer back to the thread whose TASKx function is defined in the application, and stores the next and previous items in the list. We can think of it as a node of a circular doubly linked list.

  typedef struct
  {
    uint64_t stack[THREAD_STACK_SIZE];  /**< Thread stack */
  } RTOS_stack_t;
  1. A thread stack of THREAD_STACK_SIZE (1024) entries of a 64-bit type, i.e. 1024 double words = 1024 * 8 = 8 KB for each thread

Note: on our STM32F1 we can’t use more than 7 threads, so we need to set THREAD_PRIORITY_LEVELS to 2.

If we need to use more threads, we need to decrease the stack size.
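
For reference, here is a minimal sketch of the two configuration macros assumed above (the header name rtos_config.h is just an illustration, not the project’s actual file):

  /* rtos_config.h - hypothetical location for the RTOS configuration macros */
  #define THREAD_STACK_SIZE        (1024u)   /* stack entries of 64 bits: 1024 * 8 = 8 KB per thread */
  #define THREAD_PRIORITY_LEVELS   (2u)      /* number of ready lists; limited by available RAM */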

  struct listItem_t
  {
    struct listItem_t * pNext;     /**< Pointer to the next list item */
    struct listItem_t * pPrev;     /**< Pointer to the previous list item */
    void * pThread;                /**< Pointer to the thread of the item */
    void * pList;                  /**< Pointer to the list in which the item is */
  };
  typedef struct listItem_t RTOS_listItem_t;
  1. pNext: pointer to the next item in the list
  2. pPrev: pointer to the previous item in the list
  3. pThread: pointer to the thread that owns this item (whose task function is defined in main.c as an application task)
  4. pList: pointer to the list the item is currently in (for our RTOS this may be the ready, waiting, or suspended list; the running thread is a single item, not a list)
  typedef struct
  {
    RTOS_listItem_t * pNext;     /**< Pointer to the next list item */
    RTOS_listItem_t * pPrev;     /**< Pointer to the previous list item */
  } RTOS_listEnd_t;

The list end item saves memory for each list head: we define a dedicated end type instead of reusing listItem_t with a NULL pThread (note the optimization point below).

There is no need for a pList pointer here, because this item is only used to connect the start of the list to its end, so the field can be dropped to save memory in every list.

  1. pNext: pointer to the next item in the list
  2. pPrev: pointer to the previous item in the list
  typedef struct
  {
    uint32_t numOfItems;           /**< Number of threads items in the list */
    RTOS_listItem_t * pIndex;        /**< Pointer to the current item */
    RTOS_listEnd_t listEnd;          /**< List end */
  } RTOS_list_t;
  1. Number of items in this list
  2. Pointer to the current item in the list
  3. The list end item

Let’s look at some examples as diagrams:

Empty list

One item in the list

Two items in the list

Then let’s get to our main functions.

Main Functions for Task Creation and Deletion

You can look at their implementation in the file rtos_list.c; a rough sketch follows the list below.

  1. RTOS_listInit: initializes a list with its end item in it
  2. RTOS_listInsertEnd: adds an item at the pIndex of the list
  3. RTOS_listRemove: removes an item from its list
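
As a rough idea of what these functions could look like, here is a minimal sketch based only on the structures above; it is not the exact code in rtos_list.c:

  void RTOS_listInit(RTOS_list_t * pList)
  {
    /* Empty list: the end item points to itself in both directions */
    pList->pIndex = (RTOS_listItem_t *) &pList->listEnd;
    pList->listEnd.pNext = (RTOS_listItem_t *) &pList->listEnd;
    pList->listEnd.pPrev = (RTOS_listItem_t *) &pList->listEnd;
    pList->numOfItems = 0u;
  }

  void RTOS_listInsertEnd(RTOS_list_t * pList, RTOS_listItem_t * pNewItem)
  {
    /* Link the new item in right after the current index of the circular list */
    pNewItem->pNext = pList->pIndex->pNext;
    pNewItem->pPrev = pList->pIndex;
    pList->pIndex->pNext->pPrev = pNewItem;
    pList->pIndex->pNext = pNewItem;
    pList->pIndex = pNewItem;
    pNewItem->pList = (void *) pList;
    pList->numOfItems++;
  }

  void RTOS_listRemove(RTOS_listItem_t * pRemovedItem)
  {
    /* Unlink the item and fix the owning list's index and count */
    RTOS_list_t * pList = (RTOS_list_t *) pRemovedItem->pList;
    pRemovedItem->pNext->pPrev = pRemovedItem->pPrev;
    pRemovedItem->pPrev->pNext = pRemovedItem->pNext;
    if (pList->pIndex == pRemovedItem)
    {
      pList->pIndex = pRemovedItem->pPrev;
    }
    pList->numOfItems--;
  }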

RTOS Thread Functions

First we need to create the ready list, which is an array of lists with one entry per priority level.

We also need a top-priority variable indicating the highest priority that currently has ready threads.

  static RTOS_list_t readyList[THREAD_PRIORITY_LEVELS];
  
  static uint32_t currentTopPriority = (THREAD_PRIORITY_LEVELS - 1);

Then we need a pointer to the current running thread.

  RTOS_thread_t * pRunningThread;

Then we have 3 main thread functions:

  1. RTOS_threadInitLists function: initializes each ready list entry as an empty list with its end item
  2. RTOS_threadCreate function (described below)

TCB inside the stack allocated for a task

This function starts by validating its input values, then assigns the stack pointer of the thread, which is placed right below the thread’s context frame (the TCB). Knowing that the stack size is 8 KB and the frame is 18 words, the thread SP is: thread SP = stack base + 1024 * 8 - 18 * 4. The TCB also holds the exception return value at its end, which identifies which stack (MSP or PSP) is used and whether FPU state is restored during context switching. This function is also going to be called from the SVC handler; see the sketch after this list.

  3. RTOS_threadGetCurrentReady function: gets the highest-priority thread (the lowest index in the ready list array) that is ready to run
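
A minimal sketch of the two ideas above, reusing the structures and variables shown earlier. The function signatures, the MEM32_ADDRESS macro, the exact frame offsets, and the EXC_RETURN value 0xFFFFFFFD are illustrative assumptions, not the exact implementation:

  /* Illustrative helper for writing a word at an absolute address */
  #define MEM32_ADDRESS(ADDR)   (*((volatile uint32_t *)(ADDR)))

  void RTOS_threadCreate(RTOS_thread_t * pThread, RTOS_stack_t * pStack,
                         uint32_t priority, void (*pFunction)(void))
  {
    /* Place the thread SP below the 18-word context frame at the top of the 8 KB stack:
     * thread SP = stack base + 1024 * 8 - 18 * 4 */
    pThread->pStackPointer = ((uint32_t) pStack + THREAD_STACK_SIZE * 8u) - (18u * 4u);

    /* Minimal initial frame: xPSR (Thumb bit set), PC = thread function,
     * and the exception return value at the bottom (thread mode, PSP, no FPU) */
    MEM32_ADDRESS(pThread->pStackPointer + (17u * 4u)) = 0x01000000u;           /* xPSR */
    MEM32_ADDRESS(pThread->pStackPointer + (16u * 4u)) = (uint32_t) pFunction;  /* PC   */
    MEM32_ADDRESS(pThread->pStackPointer)              = 0xFFFFFFFDu;           /* EXC_RETURN */

    /* Register the thread in the ready list of its priority */
    pThread->priority = priority;
    pThread->item.pThread = (void *) pThread;
    RTOS_listInsertEnd(&readyList[priority], &pThread->item);
    if (priority < currentTopPriority)
    {
      currentTopPriority = priority;
    }
  }

  RTOS_thread_t * RTOS_threadGetCurrentReady(void)
  {
    /* Highest priority = lowest index whose ready list still has threads */
    while (readyList[currentTopPriority].numOfItems == 0u)
    {
      currentTopPriority++;
    }
    return (RTOS_thread_t *) readyList[currentTopPriority].pIndex->pThread;
  }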

Some notes on our RTOS:


Scheduling and Mutual Exclusion

Priority-based scheduler

Non-preemptive (cooperative) kernel


Preemptive Kernel


Start Scheduling function

This function will start the execution of threads with the help of an SVC call. But first we need to create an idle thread and add it to the ready list: to keep the scheduler always running, we need at least one dummy thread, called the idle thread, which is always present in the ready list.

If all tasks end up in the waiting lists, which is not supposed to happen, we will run the idle thread.

Here we also need to fetch the highest-priority thread and set it as the running thread.
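
A rough sketch of how the scheduler start could look under those rules; the names RTOS_schedulerStart and idleThreadFunction, and the SVC number used here, are illustrative assumptions:

  static RTOS_thread_t idleThread;
  static RTOS_stack_t idleThreadStack;

  static void idleThreadFunction(void)
  {
    /* Dummy thread: always ready, runs only when every other thread is waiting */
    while (1)
    {
    }
  }

  void RTOS_schedulerStart(void)
  {
    /* Create the idle thread at the lowest priority so the ready list is never empty */
    RTOS_threadCreate(&idleThread, &idleThreadStack,
                      (THREAD_PRIORITY_LEVELS - 1u), idleThreadFunction);

    /* Pick the highest-priority ready thread as the first running thread */
    pRunningThread = RTOS_threadGetCurrentReady();

    /* Trigger an SVC call to start executing the first thread */
    __asm volatile ("svc #0");
  }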


Mutex, Semaphore and Mailbox

Mutual Exclusion

Mutex

Semaphore

A semaphore is the same as a mutex except that we don’t check a binary value: on each give it safely increments the semaphore value, and on each take it decrements it. At creation we can set its initial value. As with the mutex, a thread is moved to the waiting list when it tries to take a semaphore that can’t be taken, which for us means its value is 0 (we could make that condition configurable and pass it to our function, but for now this is enough). Then, when the semaphore is given, we move the items in its waiting list back to the ready list so they can retry taking the semaphore.
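
A minimal sketch of that behavior, reusing the list functions and the readyList/pRunningThread names from earlier. The RTOS_semaphore_t type and the function names are assumptions, and only the first waiter is woken here for simplicity:

  typedef struct
  {
    uint32_t value;              /**< Current semaphore value */
    RTOS_list_t waitingList;     /**< Threads waiting to take this semaphore */
  } RTOS_semaphore_t;

  /* Take: decrement if value > 0, otherwise move the running thread to the waiting list */
  uint32_t RTOS_semaphoreTake(RTOS_semaphore_t * pSemaphore)
  {
    if (pSemaphore->value > 0u)
    {
      pSemaphore->value--;
      return 1u;   /* taken */
    }
    else
    {
      /* Blocked: remove from the ready list, wait on the semaphore's list */
      RTOS_listRemove(&pRunningThread->item);
      RTOS_listInsertEnd(&pSemaphore->waitingList, &pRunningThread->item);
      return 0u;   /* caller retries after being woken up */
    }
  }

  /* Give: increment the value and move a waiting thread back to the ready list */
  void RTOS_semaphoreGive(RTOS_semaphore_t * pSemaphore)
  {
    pSemaphore->value++;
    if (pSemaphore->waitingList.numOfItems > 0u)
    {
      RTOS_thread_t * pThread =
          (RTOS_thread_t *) pSemaphore->waitingList.pIndex->pThread;
      RTOS_listRemove(&pThread->item);
      RTOS_listInsertEnd(&readyList[pThread->priority], &pThread->item);
    }
  }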

Notice that we can’t nest different semaphores, because the second one will take control of the threads in the waiting list while the first becomes a dummy semaphore we can’t rely on, since it is detached from the real thread scope.

Also notice that we can rely on the SVC interrupt having the highest priority, so no interrupt will preempt this operation and nothing can race with it while it is running. But if we use the NVIC to reduce its priority, we will hit a problem and will need to turn this into a critical section by disabling interrupts before and after setting the mutex or semaphore value. That, in turn, delays interrupts, which may be critical events, so it is not an efficient approach. Hence we really need to use exclusive operations so we can add dynamic configuration to our RTOS, allowing the priority of the SVC interrupt to be raised or lowered according to our application.
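
For illustration, a hedged sketch of such an exclusive operation using the CMSIS intrinsics __LDREXW and __STREXW (available on ARMv7-M cores like the Cortex-M3/M4); the helper name and the included header depend on the project setup and are assumptions here:

  #include "stm32f1xx.h"   /* device header providing the CMSIS __LDREXW / __STREXW intrinsics */

  /* Hypothetical helper: increment a counter atomically without disabling interrupts */
  static void atomicIncrement(volatile uint32_t * pValue)
  {
    uint32_t value;
    do
    {
      value = __LDREXW(pValue);                   /* exclusive load of the current value */
    } while (__STREXW(value + 1u, pValue) != 0u); /* retry if the exclusive store failed */
  }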

Mailbox

It is nothing more than a queue, but its enqueue and dequeue operations need to happen in a safe manner, so we perform them inside a mutex block or mutex-like functions, giving each thread a consistent, up-to-date view of the queue buffer and avoiding races on it. Also, when state-machine programming comes into the picture, we can use the mailbox for message notifications or event messages that move the operation between different states.
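
As an illustration of the mailbox idea, here is a minimal sketch of a queue structure and an enqueue operation; the RTOS_mailbox_t type and RTOS_mailboxWrite name are assumptions, and the caller is expected to be inside the mutex-like protected section described above:

  #include <stdint.h>
  #include <string.h>

  typedef struct
  {
    void * pBuffer;              /**< Queue storage */
    uint32_t messageSize;        /**< Size of one message in bytes */
    uint32_t bufferLength;       /**< Capacity in messages */
    uint32_t messagesNum;        /**< Messages currently stored */
    uint32_t writeIndex;         /**< Next write position */
    uint32_t readIndex;          /**< Next read position */
    RTOS_list_t waitingList;     /**< Threads waiting on this mailbox */
  } RTOS_mailbox_t;

  /* Enqueue one message; called from inside the mutex-like protected section */
  uint32_t RTOS_mailboxWrite(RTOS_mailbox_t * pMailbox, const void * pMessage)
  {
    if (pMailbox->messagesNum == pMailbox->bufferLength)
    {
      return 0u;   /* full: the caller may block on the waiting list instead */
    }
    memcpy((uint8_t *) pMailbox->pBuffer + (pMailbox->writeIndex * pMailbox->messageSize),
           pMessage, pMailbox->messageSize);
    pMailbox->writeIndex = (pMailbox->writeIndex + 1u) % pMailbox->bufferLength;
    pMailbox->messagesNum++;
    return 1u;
  }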

Notice that we need to avoid nesting mutexes, semaphores, and mailboxes. Nesting moves the running thread into the waiting list of the last called block (mutex, semaphore, or mailbox), which detaches the scopes from their logical operation; the outer block then has no effect, or its effect relies on other scopes, so as a functional component it no longer behaves as a correct concurrent operation. Take care of this while implementing your app with this RTOS.


Timer Support

Why do we need timer support?

Synchronization Events with timer support

Mutex Lock with Timer Support

RTOS Lists with Timer Support


Thread Destroy

Key Notes

From those lists we notice that each thread may need to wait for a certain time on a certain resource, so what can we do?

Can we move the same item to both the waiting list and the timer list?

We shouldn’t do that. Whenever an event arrives, whether it is a time event or a mutex release, we want to move the item to the ready list. But how can we move the same item to two different lists without modifying the list item structure? Each item only points to one list.

We can tie each thread to two list items, generic_list_item and event_list_item. The generic_list_item moves between the ready and timer lists, which are global to our RTOS structure, while the event_list_item moves into the local waiting list of a resource such as a mutex, semaphore, or mailbox. With that structure we can put the same thread into two different lists (an event list and the timer list), which enables us to add timeout capability to our events and waiting lists; a sketch of the updated thread structure follows.
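
A minimal sketch of how the thread structure from the beginning could be extended for this; the field names follow the text, but the exact layout is an assumption:

  typedef struct thread_t
  {
    uint32_t pStackPointer;               /**< Stack pointer */
    uint32_t priority;                    /**< Thread priority */
    RTOS_listItem_t genericListItem;      /**< Item moved between the ready and timer lists */
    RTOS_listItem_t eventListItem;        /**< Item moved into a mutex/semaphore/mailbox waiting list */
  } RTOS_thread_t;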


Priority Inversion

Bounded Priority Inversion

Unbounded Priority Inversion


DeadLock

DeadLock Example

We can represent the circular wait (deadlock) with the Resource Allocation Graph method, which is used to visualize resource allocations and requests in the system. Threads are drawn as circles, resources as rectangles, and black dots represent the instances of a resource that a mutex can share; if a resource contains several dots, that means we have multiple instances of a certain semaphore. An arrow from a thread to a resource indicates a request or a wait, and an arrow from a resource to a thread indicates an allocation.

Resource allocation graph

Some Examples

DeadLock Example with three tasks
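
In code, a simpler two-task version of a deadlock could look like this: each task holds one mutex and waits forever for the other. The RTOS_mutex_t type and the RTOS_mutexLock/RTOS_mutexRelease names are made up for illustration:

  RTOS_mutex_t mutexA;
  RTOS_mutex_t mutexB;

  void task1(void)
  {
    while (1)
    {
      RTOS_mutexLock(&mutexA);      /* task1 holds A ... */
      RTOS_mutexLock(&mutexB);      /* ... then waits for B, which task2 holds */
      /* shared work */
      RTOS_mutexRelease(&mutexB);
      RTOS_mutexRelease(&mutexA);
    }
  }

  void task2(void)
  {
    while (1)
    {
      RTOS_mutexLock(&mutexB);      /* task2 holds B ... */
      RTOS_mutexLock(&mutexA);      /* ... then waits for A, which task1 holds: circular wait */
      /* shared work */
      RTOS_mutexRelease(&mutexA);
      RTOS_mutexRelease(&mutexB);
    }
  }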

Handling Deadlocks