The Micrium OS Kernel has a rich set of built-in instrumentation that collects real-time performance data. This data can be used to provide invaluable insight into your kernel‑based application, allowing you to have a better understanding of the run-time behavior of your system. Having this information readily available can, in some cases, uncover potential real-time programming errors and allow you to optimize your application. In this two-part series of posts, we will explore the statistics yielded by the kernel's instrumentation, and we'll also consider a unique way of visualizing this information.
Many IDEs provide, as part of their debugger, what is called a Kernel Awareness plug-in. Kernel Awareness allows you to see the status of certain kernel data structures (mostly tasks) using a tabular format. A notable problem with IDE-based Kernel Awareness plug-ins is that the information is displayed only when you stop the target (i.e., when you reach a breakpoint or when you step through code). This is quite useful and often sufficient if you are looking at such things as maximum stack usage or how often a task executed, but you only get a snapshot of an instant in time. This is similar to taking a picture versus watching a movie. However, there are situations where you simply cannot stop the target to examine kernel or other variables: engine control, conveyor belts, ECG monitoring, network communications and more. In other words, such systems have to be monitored while the target is running.
Micrium offers a powerful tool called µC/Probe, a target-agnostic, Windows-based application that allows you to display or change the value of variables on a running target with little or no CPU intervention. Although µC/Probe can be used with systems that do not incorporate Micrium's kernels, the tool includes Kernel Awareness screens which provide a ‘dashboard’ or ‘instrument panel’ for µC/OS-II, µC/OS-III and the Micrium OS Kernel. The ‘Task’ window of the Kernel-Awareness screen is shown in Figure 1.
Figure 1 – Micrium OS Kernel Awareness in µC/Probe
The Task Window is one of the many views into performance and status data collected by the Micrium OS Kernel and displayed by µC/Probe. Specifically, µC/Probe exposes status information for Semaphores, Mutexes, Event Flags, Message Queues, and more as shown in Figure 2.
Figure 2 – Additional views of the Micrium OS Kernel status in µC/Probe
With µC/Probe, you can display or change the value of any application variable (i.e. your variables), at run-time. Values can be represented numerically, using gauges or meters, bar graphs, charts, using virtual LEDs, on an Excel spreadsheet, using TreeView and through other graphical components. Kernel variables can be similarly accessed, and the Kernel Awareness screens provide a pre-populated interface for reading these variables. In this post and the next, we'll consider several of the fields in the 'Task' window of the Micrium OS Kernel Awareness screen. We'll also look into the instrumentation underlying the fields.
Each row in the 'Task' window represents a task managed by the Micrium OS Kernel. The task name is shown in the third column and is quite useful to have, as it associates the data in a row with a specific task that was created. The name is the one specified in the OSTaskCreate() call.
As you may know, a real-time kernel like the Micrium OS Kernel requires that you allocate a stack for each task in your application. The Micrium OS Kernel has an optional built-in task called the statistics task which runs every 100 ms (the frequency is configurable at compile-time) and calculates how much stack each task has used up. The information collected by the statistics task is stored in Task Control Blocks (or TCBs), and µC/Probe displays that information as shown in Figure 3.
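The idea behind the stack measurement is simple: the kernel writes a known fill pattern into each stack at task creation, and the statistics task later counts how much of that pattern is still intact. Below is a minimal sketch of that technique; the fill value, type name, and function name are illustrative, not the kernel's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define STK_FILL  0xDEADBEEFu      /* pattern written into the stack at task creation */

typedef uint32_t CPU_STK;          /* one stack unit: 4 bytes on this CPU             */

/* Determine the worst-case number of stack units a task has ever used by
 * scanning from the unused end of the stack (assuming a downward-growing
 * stack) and counting entries where the fill pattern is still intact.    */
size_t stk_used(const CPU_STK *base, size_t size)
{
    size_t free_units = 0u;
    while ((free_units < size) && (base[free_units] == STK_FILL)) {
        free_units++;              /* still untouched: fill pattern intact */
    }
    return size - free_units;      /* worst-case usage, in stack units     */
}
```

Because the scan only detects entries that were ever overwritten, the result is a high-water mark: it reflects the deepest the stack has ever grown, not its current depth.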
The 3rd column indicates the total available stack size of each task. The value is in stack units so, to convert it to bytes, you simply multiply the value shown by the size of a stack unit for the CPU architecture you are using. In this case, a stack unit was 4 bytes wide so, for many of the tasks, the total available stack size is 400 bytes.
The 1st column (#Used) indicates worst case stack usage of the task (again in stack units).
The 2nd column (#Free) shows how many stack units are left, computed from the total size and the amount used:
#Free = Size - #Used
Figure 3 – µC/Probe Task Stacks View
The bar graph is probably the most useful representation of stack usage since, at a glance, you can tell whether or not application tasks have sufficient stack space. In fact, the bar graph is color coded. If stack usage for a given task is below 70%, the bar graph is GREEN. If stack usage is between 70% and 90%, the bar graph is displayed in YELLOW. Finally, if the stack usage exceeds 90%, the bar graph changes to RED. You should thus consider increasing the stack size for tasks with YELLOW bar graphs, and you should definitely increase it for any tasks with RED bar graphs.
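The free-space formula and the color thresholds described above fit in a few lines of C. The function names below are illustrative, not part of any Micrium API:

```c
#include <assert.h>

typedef enum { STK_GREEN, STK_YELLOW, STK_RED } StkColor;

/* #Free = Size - #Used, as shown in the Task window (values in stack units) */
unsigned stk_free(unsigned size, unsigned used)
{
    return size - used;
}

/* Bar-graph color: below 70% green, 70-90% yellow, above 90% red */
StkColor stk_color(unsigned size, unsigned used)
{
    unsigned pct = (100u * used) / size;
    if (pct < 70u)  { return STK_GREEN;  }
    if (pct <= 90u) { return STK_YELLOW; }
    return STK_RED;
}
```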
Stack overflows are the number one issue you're likely to encounter when you develop kernel‑based applications. If a task's stack appears RED in µC/Probe, you stand a good chance of experiencing strange behaviors, because an overflow might alter variables located just beyond the top of the stack (at lower memory addresses, since stacks typically grow downward). To avoid such problems, you should increase the size of any RED stacks.
The Micrium OS Kernel computes overall CPU usage and updates this value every time the statistics task runs. µC/Probe displays this information within the Kernel Awareness window as shown in Figure 4.
Figure 4 – µC/Probe Global CPU Usage
The gauge shows two things: the current CPU usage as a percentage (the needle) and the peak CPU usage (the small RED triangle). As shown, peak CPU usage was a tad above 10%. The CPU usage of the idle task is not counted in the calculation; otherwise, the gauge would always show 100% and that would not be very useful. The ‘Total CPU Usage’ gauge accounts for the execution of tasks as well as ISRs during the measurement period.
The chart on the right shows CPU usage over the past 60 seconds. The chart is updated every second and scrolls to the left, so the most recent CPU usage is displayed on the far right. The RED line shows peak CPU usage and the BLUE trace shows current CPU usage.
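One common way to derive overall CPU usage, and the general idea behind this kind of kernel statistic, is to count idle-task iterations during each measurement period and compare that count against a calibrated maximum measured at startup under zero load. The sketch below is a simplified model of that idea; the structure and field names are assumptions, not the kernel's actual code.

```c
#include <stdint.h>
#include <assert.h>

/* Simplified model: idle_ctr_max is the number of idle-task iterations
 * observed during one measurement period with no application load
 * (calibrated at startup). Names here are illustrative.              */
typedef struct {
    uint32_t idle_ctr_max;   /* calibration: idle iterations at 0% load   */
    uint32_t usage_max;      /* peak CPU usage seen so far (the triangle) */
} CpuStats;

/* Called once per measurement period with the idle iterations counted
 * during that period; returns current CPU usage as a percentage.      */
uint32_t cpu_usage_update(CpuStats *s, uint32_t idle_ctr)
{
    uint32_t usage;

    if (idle_ctr >= s->idle_ctr_max) {
        usage = 0u;                      /* fully idle this period        */
    } else {
        usage = 100u - (100u * idle_ctr) / s->idle_ctr_max;
    }
    if (usage > s->usage_max) {
        s->usage_max = usage;            /* peak detector for the gauge   */
    }
    return usage;                        /* needle position               */
}
```

Notice that the less the idle task runs, the higher the reported usage, which is why the idle task itself must be excluded from the figure.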
The Micrium OS Kernel also computes the execution time of each task. This is then used to figure out the relative CPU usage of each task. µC/Probe displays this information as a bar graph as shown in Figure 5.
The field at the bottom is the CPU usage of the Micrium OS Kernel’s idle task. Since the CPU of the depicted system is not overly busy, the idle task consumes over 90% of the CPU time. The idle task is typically a good place to add code to put the CPU in low power mode, as is often required for battery-powered applications.
The BLACK vertical bar with the small triangle pointing upwards represents the peak CPU usage of that task. The peak usage is tracked by the Micrium OS Kernel and can be a useful indicator of the behavior of your tasks. The per-task CPU usage statistic gives you a good idea of where your CPU is spending its time. This can help confirm your expectations and possibly help you determine task priorities.
You should note that the CPU usage for each task also includes time spent in ISRs while the task is running. In other words, the Micrium OS Kernel doesn’t subtract out ISR time while a task is running as this would add too much overhead in the calculation.
Figure 5 – µC/Probe Per-Task CPU Usage
You will notice that adding all the values in the figure yields a total of 97.06% rather than exactly 100%. The reason for this is that µC/Probe doesn’t sample all the values at exactly the same time.
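Conceptually, the per-task figure is each task's share of the execution time accumulated during the measurement period, with the peak retained separately, and with ISR time charged to whichever task was running, as noted above. The following sketch models that calculation; the names and the hundredths-of-a-percent scaling are illustrative assumptions.

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative per-task statistics; not the kernel's actual TCB fields. */
typedef struct {
    uint64_t cycles;      /* execution time accumulated this period        */
    uint32_t usage;       /* relative CPU usage, in 100ths of a percent    */
    uint32_t usage_max;   /* peak usage (the small BLACK triangle)         */
} TaskStats;

/* Recompute each task's relative usage at the end of a measurement period.
 * ISR time that elapsed while a task was running is simply part of that
 * task's 'cycles', so no subtraction is performed.                        */
void per_task_usage_update(TaskStats *tasks, int n, uint64_t total_cycles)
{
    for (int i = 0; i < n; i++) {
        tasks[i].usage = (uint32_t)((10000u * tasks[i].cycles) / total_cycles);
        if (tasks[i].usage > tasks[i].usage_max) {
            tasks[i].usage_max = tasks[i].usage;   /* peak detector        */
        }
        tasks[i].cycles = 0u;                      /* restart next period  */
    }
}
```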
As shown in Figure 6, each task contains a counter that keeps track of how often it actually had control of the CPU. This feature is helpful to see if a task executes as often as you expect.
A task with a count of zero (0) or not incrementing indicates that the task doesn’t get a chance to execute. This can be normal if the event that the task is waiting for never occurs. For example, a task may be waiting for an Ethernet packet to arrive. If the cable is not connected then the task counter for that task will not increment.
Another situation where the task context switch counter would not increment is as shown in Figure 7. Here, an ISR is posting messages to a task. Unfortunately, the ISR is posting to the wrong queue and thus the recipient never receives those messages.
Figure 6 – µC/Probe Per-Task CtxSwCtr
Figure 7 – ISR Posting to the Wrong Message Queue
Semaphores are typically used as a signaling mechanism to notify a task that an event occurred. An event can be generated by an ISR or another task.
The Micrium OS Kernel has a built-in semaphore for every task and thus an ISR or another task can directly signal a task. This greatly simplifies system design, reduces RAM footprint and is much more efficient than the conventional method of allocating a separate kernel object for this purpose.
Figure 8 shows that the task semaphore consists of a counter that indicates how often the task has been signaled since it was last pended on.
A zero value in the ‘Ctr’ column indicates either that the task semaphore was not signaled, or that the task received the signal and already processed the event. A non-zero (and possibly incrementing) count indicates that signals are arriving faster than the task can process them. You might need to increase the priority of the signaled task, or investigate whether there is another cause.
The ‘Signal Time’ indicates how long it took (in microseconds) between the occurrence of the signal and when the task woke up to process it.
The ‘Signal Time (Max)’ is a peak detector of the ‘Signal Time’. In other words, this column shows the worst case response time for the signal.
Figure 8 – Task Semaphores
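The three columns in Figure 8 amount to a counter plus a peak detector. The following is a minimal model of those semantics; the field and function names are illustrative, not the kernel's actual API.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Illustrative model of the task-semaphore fields shown in Figure 8. */
typedef struct {
    uint32_t ctr;              /* 'Ctr': signals posted but not yet consumed */
    uint32_t signal_time;      /* 'Signal Time': last post-to-wakeup (us)    */
    uint32_t signal_time_max;  /* 'Signal Time (Max)': worst case            */
} TaskSem;

void task_sem_post(TaskSem *s)          /* from an ISR or another task        */
{
    s->ctr++;                           /* a growing ctr means the task lags  */
}

/* Called when the pending task wakes up; elapsed_us is the time between
 * the post and the wakeup.                                                  */
bool task_sem_consume(TaskSem *s, uint32_t elapsed_us)
{
    if (s->ctr == 0u) {
        return false;                   /* nothing pending: task would block  */
    }
    s->ctr--;
    s->signal_time = elapsed_us;
    if (elapsed_us > s->signal_time_max) {
        s->signal_time_max = elapsed_us;   /* peak detector                   */
    }
    return true;
}
```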
Message queues are typically used as a way to notify a task that an event has occurred as well as provide additional information beyond the fact that the event occurred. A message can be sent from an ISR or another task. The message queue is implemented as a FIFO (First-In-First-Out) or a LIFO (Last-In-First-Out).
The Micrium OS Kernel has a built-in message queue for every task and thus an ISR or another task can directly send messages to a specific task in your application. This greatly simplifies system design, reduces RAM footprint and is much more efficient than the conventional method of allocating a separate message queue for this purpose.
Figure 9 shows that the task message queue contains 5 fields that are displayed by µC/Probe.
The 1st column indicates the current number of entries in the message queue. If messages are being consumed at an adequate rate then this value should typically be 0.
The 3rd column contains the total number of messages that can be queued at any given time. In other words, the size of the queue.
The 2nd column indicates the maximum number of messages ever observed in the queue. If this value reaches the number in the 3rd column, then your application is not able to handle messages as fast as they are produced. You might consider either raising the priority of the receiving task or increasing the size of the message queue to avoid losing any messages.
Figure 9 – Task Message Queue
The ‘Msg Sent Time’ indicates how long it took (in microseconds) between when the message was posted and when it was received and processed by the task.
The ‘Msg Sent Time (Max)’ is a peak detector of the ‘Msg Sent Time’. In other words, this column shows the worst case response time for processing the message.
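The queue fields described above follow the same pattern as the task semaphore: a bounded count with a high-water mark, plus a latency value with a peak detector. A minimal model, with illustrative names that are not the kernel's actual fields:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Illustrative model of the task message-queue fields shown in Figure 9. */
typedef struct {
    uint32_t entries;        /* 1st column: messages currently queued       */
    uint32_t entries_max;    /* 2nd column: peak number of queued messages  */
    uint32_t size;           /* 3rd column: capacity of the queue           */
    uint32_t sent_time;      /* 'Msg Sent Time': last post-to-receive (us)  */
    uint32_t sent_time_max;  /* 'Msg Sent Time (Max)': worst case           */
} TaskQStats;

bool task_q_post(TaskQStats *q)         /* from an ISR or another task      */
{
    if (q->entries >= q->size) {
        return false;                   /* queue full: message would be lost */
    }
    q->entries++;
    if (q->entries > q->entries_max) {
        q->entries_max = q->entries;    /* compare this against 'size'       */
    }
    return true;
}

/* Called when the receiving task picks up a message; elapsed_us is the
 * time between the post and the receive.                                   */
bool task_q_pend(TaskQStats *q, uint32_t elapsed_us)
{
    if (q->entries == 0u) {
        return false;                   /* empty: task would block           */
    }
    q->entries--;
    q->sent_time = elapsed_us;
    if (elapsed_us > q->sent_time_max) {
        q->sent_time_max = elapsed_us;  /* peak detector                     */
    }
    return true;
}
```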
The example code I used didn’t make use of the task message queues.
In this post we examined, via µC/Probe, a number of the statistics built into the Micrium OS Kernel, including those for stack usage, CPU usage (total and per-task), context-switch counts, and signaling times for task semaphores and queues. We'll examine the kernel's built-in code for measuring interrupt-disable and scheduler-lock times in the next post of this series.