Overclocking the stm32f303k8 (Nucleo-32 board)


A few weeks ago I did a stupid project here, in which I designed a small dev board similar to the blue-pill board (which uses the stm32f103c8t6). The difference is that I used the stm32f373c8t6. In the meantime I delayed the PCB order, because I wanted to make some changes, like using 0603 components and removing the extra pad areas for hand-soldering. And before I made those changes, one day, BOOM! I saw this NUCLEO-F303K8 board, which looks like the blue-pill, and said, OK, let's order this and screw the custom board, who cares. Therefore, as a proper engineer, I ordered two boards (one is never enough) without even reading the details, and after a couple of days the boards arrived.

Double win, I thought. It shipped faster and I don't have to do any soldering. So, I got home, opened the packaging and BOOM! WTF is this?!? Why does it have 2 MCUs? Where is the connector with the SWDIO and SWCLK pins to flash it with my st-link? Why is there also an stm32f103c8t6 on the back side of the PCB? What's going on here?

Oh, wait… Oh, nooo… And then I realized all the reasons why you should always read the text around the pictures before you buy something. Of course, I spontaneously forgave myself, so I can repeat the same mistake the next time. After reading the manual (always RTFM), I realized that the USB port is not connected to the stm32f303, that the stm32f103 is the on-board st-link, and also why the board was a bit more expensive than I expected (though Arrow had a lower price compared to others). I also realized that the stm32f303k8 is the small brother of the family, so no USB anyway, not much RAM and several other things. But it still has all the other goodies like 2x DACs, an opamp, ADCs, I2C, SPI etc. Enough goodies for my happiness meter not to drop that much.


After the initial shock, I said to myself, OK, let's do something simple and stupid at least, just to see that it works. I plugged in the USB cable, the LEDs were flickering, cool. Then I saw this ARM mbed enabled thing on the case and, although I was sure that it would be something silly, I wanted to try it. So, after spending ~30 mins I found out that I'm not the only one who does stupid things. ARM elevates that domain to a very high standard, which is difficult to compete with. I had tried mbed before, a lot of years ago when it started, and it was meh, but I didn't expect it to still be so much… meh. I mean, who the heck is going to develop on an online IDE and compiler, building on top of libs that you can't access to fix bugs or hack? What a mess. Sorry ARM, but I really can't understand who's using this thing. This is not a "professional" tool, and hobbyists have much better tools like the various Whateverduinos. So after losing 30 mins of my life, I decided to make one of my standard cmake templates. Simple. But… I also needed to do something stupid on top, so I decided to add to my template a way to overclock the stm32f303. Neat.

And yes… In this project you'll get an stm32f303k8 overclocked up to 128MHz (if you're lucky). And it seems to work just fine; at least both of my boards handled the overclock without a problem.


You can download the source code from here:


This template is quite simple, just a blinking LED and a USART port @ 115200 baud. You can use it as a cmake template to build your own stuff on top of it. So, let's have a look at the main() function to explain what is going on there.

int main(void)
{
    /* Initialize clock, enable safe clocks */
    sysclk_init(SYSCLK_SOURCE_INTERNAL, SYSCLK_72MHz, 1);

    RCC_ClocksTypeDef clocks;
    /* Get system frequency */
    RCC_GetClocksFreq(&clocks);
    SystemCoreClock = clocks.SYSCLK_Frequency;
    if (SysTick_Config(SystemCoreClock / 1000)) {
        /* Capture error */
        while (1);
    }

    /* Setup uart port */
    /* Set callback for uart rx */
    dbg_uart.fp_dev_uart_cb = uart_rx_parser;
    dev_timer_add((void*) &dbg_uart, 5, (void*) &dev_uart_update, &dev_timer_list);

    /* Initialize led module */
    /* Attach led module to a timer */
    dev_timer_add((void*) &led_module, led_module.tick_ms, (void*) &dev_led_update, &dev_timer_list);
    /* Add a new led to the led module */
    dev_led_set_pattern(&led_status, 0b00001111);

    printf("Program started\n");
    printf("Freq: %lu\n", clocks.SYSCLK_Frequency);

    while (1) {
        GPIOB->ODR ^= GPIO_Pin_4;
    }
}
The first function is the one that sets the system clock. The board doesn't have an external XTAL, so the internal oscillator is used. The default frequency is 72MHz, which is the maximum official frequency. In order to overclock to 128MHz, you need to call the same function with the SYSCLK_128MHz parameter. The first parameter of the function selects the clock source (I've only tested the internal one, as there's no XTAL on this board) and the last parameter enables safe clocks for the other peripheral buses when using frequencies higher than 64MHz. Therefore, when overclocking to 128MHz (or even 72MHz), if you want to be sure that you don't exceed the official maximum frequencies for PCLK1 and PCLK2, set this flag to 1. Otherwise, everything will be overclocked. I live dangerously, so I'm always leaving this at 0. Note that if this MCU had a USB device controller, then when overclocking you couldn't achieve the exact clock that USB needs, so you'd have to stay at 72MHz max (but the 303 doesn't have USB anyway).

After setting the clock, you need to update the SystemCoreClock value with the new one, and you can call the print_clocks(&clocks) function to display all the new clock values and verify that they are correct. For example, in my case with the sys clock set to 128MHz, I get this output from the com port:

Freq: 128000000
  HCLK: 128000000
  SYSCLK: 128000000
  PCLK1: 64000000
  PCLK2: 128000000
  HRTIM1CLK: 128000000
  ADC12CLK: 128000000
  ADC34CLK: 128000000
  USART1CLK: 64000000
  USART2CLK: 64000000
  USART3CLK: 64000000
  UART4CLK: 64000000
  UART5CLK: 64000000
  I2C1CLK: 8000000
  I2C2CLK: 8000000
  I2C3CLK: 8000000
  TIM1CLK: 128000000
  TIM2CLK: 0
  TIM3CLK: 0
  TIM8CLK: 128000000
  TIM15CLK: 128000000
  TIM16CLK: 128000000
  TIM17CLK: 536873712
  TIM20CLK: 128000000

I'm using CuteCom for the serial port terminal, not only because I'm a contributor, but also because the macros plugin I've added is very useful when doing development with MCUs and UARTs.

In the above output, don't mind the TIM17CLK value; there's no such timer anyway.

To do some benchmarks with the different speeds, you need to uncomment the #define BENCHMARK_MODE line in the main.c file. By doing this, the D12 pin (PB.4) will just toggle inside the main loop. So… it's not really a benchmark, but it's still something that is affected a lot by the system clock frequency.

The rest of the lines in main() are just for setting up the LED and the UART modules. I initially developed those modules for the stm32f103, but it was easy to port them to the stm32f303.

One very important note though! In order to make the clock speeds work properly, I had to make a change inside the official standard peripheral library. More specifically, in the `source/libs/StdPeriph_Driver/src/stm32f30x_rcc.c` file at line 876, I had to do this:

        /* HSI oscillator clock divided by 2 selected as PLL clock entry */
//        pllclk = (HSI_VALUE >> 1) * pllmull;
        pllclk = HSI_VALUE * pllmull;

You see, by default the library divides the HSI_VALUE by 2 (shifts 1 bit right), which is wrong (?). They probably do this to enforce the /2 division of the HSI and base all the clock calculations on that. Therefore, if you overclock, all the calculated clock values for the other peripherals will be wrong, because they are based on the assumption that the HSI value is always divided by 2. But that's not true. As a result, the baud rate of the USART will be wrong, and although you set it to 115200bps, it will actually be 230400bps. If you want to use the PLL with the HSI as the input clock, then you need a way to fix this. Therefore, I've changed this line in the standard peripheral library, so keep that in mind in case you're porting code to other projects. Also, have a look at the README.md file in the repo for additional info.


Finally! Let's see some numbers. Again, this benchmark is stupid: just a toggling pin, measuring how the toggling frequency is affected by the system clock frequency. I'm adding all the images to the same collection, so you can click on them and scroll.

Note: The code is built with the -O3 optimization flag.

So, in the above images you can see the frequency of the toggling pin for each system clock, but let’s add them on this table, too.

System clock (MHz) Toggling speed (MHz)
16 1.33
32 2.66
64 3.55
72 3.98
128 7.09

Indeed, the toggling frequency scales with the system clock, which is expected, isn't it? At the maximum system clock of 128MHz the bit-banging speed goes up to 7.09MHz, which is OK-ish and definitely not impressive at all. But anyway, it is what it is. At least it scales properly with the frequency.

And if you think that this is slow, then consider that most people develop with the debug flags enabled in order to use the debugger, and sometimes they forget that they shouldn't use those flags for development unless the debugger is really needed, or there's a specific problem that you can't figure out by adding traces in your code. So let's see the toggling speed at 128MHz with the debug flags on.

Ouch! Do you see that? 2.45MHz @ 128MHz system clock. That's a huge impact on the speed; therefore, try not to use debuggers when they are not essential to solve a problem. From my experience, I've used a debugger maybe 5-6 times in the last 15 years. My debugger was always a toggling pin and a trace on the uart port. Yeah, I know, that might take more time to debug something, but usually it's not something that you do a lot. This time I had to use it though, because in the standard peripheral library for this MCU they've changed the interrupt names, and because of that the UART interrupts were sending the code to the `LoopForever` asm label in source/libs/startup/startup_stm32.s. By using the debugger I saw that the interrupt was sending the code there, and then in the same file I saw that the interrupt names for USART1 and 2 were different compared to the stm32f103. That happened because I ported my dev_uart library from the stm32f103 to the stm32f303.


Before you buy something, always RTFM! That will save you some time and money. Nevertheless, I'm not completely disappointed about ordering this module. I'm more disappointed that mbed is what it is. That's sad. I think of the time that some brilliant guys have spent on it and I can't really understand why. Anyway…

On the other hand, those MCUs seem to behave just fine at 128MHz. I've tried both modules and both were working fine. I guess I'll do something with them in the near future. The on-board st-link is also (I guess) a cool feature, because you have everything you need on one module, but I would prefer to just have the SWD pins, use an external st-link, and buy more modules at half the price (or even less). If anyone is thinking of buying one of those, they seem to be OK-ish.

Have fun!

Linux and the I2C and SPI interfaces and the PREEMPT-RT


In the previous stupid project I implemented a very simple I2C and SPI device with an Arduino that was interfaced with a raspberry pi, and there I did some tests using various ways to communicate with the Arduino. I pretty much showed that for those two buses it doesn't really matter if you write a kernel driver or you just use the user space to access the devices. Well, if you think about it, spidev is also a kernel driver, and when you access the I2C from user space there's still a kernel driver that does the work. So writing a driver for a specific subsystem, instead of just interfacing through a custom user-space tool, has its uses, but it's also not always necessary and needs some consideration before you decide to go either way.

Still, there was something interesting missing from that stupid project. What about speed and performance? What are your options? How do these options affect the general performance? Is there a magic solution for all cases? Well, with this stupid project I will try to fail to answer those questions and make it even worse by not giving any valuable information and answers to these questions.


Nanopi neo

For the last stupid project I used a raspberry pi, but I also provided the sources for the nanopi neo. This time I won't be that kind, so I'll only use the nanopi neo. The reason is that I like its small format, it's quite capable for the task, and I didn't want to use a powerful board for this. So, it's a low-to-mid tier board and also dirt cheap.


Last time I used the Arduino nano. Well, the nano is an excellent board for implementing an I2C/SPI slave device in… minutes. But it lacks performance, so it's completely incapable of stressing out the nanopi neo. Therefore, we need something much faster, and here comes the STM32F103, as it has all the good stuff in there: 72MHz (or up to 128MHz if you overclock it, which I did, of course), DMA for both the I2C and SPI, and it's also very, very, very cheap (I'm talking about the famous blue-pill). Therefore, I've implemented an I2C and SPI slave that both use DMA for fast data transfers.

Other components

The rest of the components are exactly the same as in the previous stupid project. So we have a whatever photo-resistor (it doesn't really matter) and a whatever LED.


This stupid project is actually focused on the Linux kernel. As everyone learns now in school, the kernel comes in two main flavors, the SMP and the PREEMPT-RT kernel. The first is the normal mainline kernel and the second one is the real-time patched version of the mainline. I won't get into the details, but just to simplify, the main difference is that the PREEMPT-RT kernel actually guarantees that any process that runs on the CPU will get a fair and predictable time of execution, which minimizes the latency of the application. Oversimplified, but this is not a post about the Linux kernel.

Therefore, what happens if you have a couple of fast devices that you want to interface under various conditions, like the CPU having a low or heavy background load? To find out, we actually need a fast slave, and the stm32f103 is just right for that, as we've seen in this stupid project, where the SPI achieved up to 63MHz by using DMA, which is way faster than the Arduino nano (and probably even the nanopi neo, actually). So, by assuring that the slave device won't be our bottleneck, we're good to go. Here you'll find the repo for the project:


In order to build the nanopi neo image for both SMP and RT kernel you need Yocto, but again I won’t get into the details on that. Therefore, to switch between the SMP and RT kernel you need to use either of the following combinations in the build/conf/local.conf file:

PREFERRED_PROVIDER_virtual/kernel = "linux-stable"
PREFERRED_VERSION_linux-stable = "4.14%"

Or for the PREEMPT-RT:

PREFERRED_PROVIDER_virtual/kernel = "linux-stable-rt"
PREFERRED_VERSION_linux-stable-rt = "4.14%"

Also, you should build the `arduino-test-image` like in the previous project (it's actually the same image).

So, now let's get to some fancy stuff. In the repo there is a tool in the linux-app folder. You need to build it with the Yocto SDK or any other arm toolchain. This tool actually opens the i2c and spi devices and reads/writes data from/to them in a loop, according to the options you pass in the command.

To make it a bit different compared to the previous project, this time the SPI slave is the photo-resistor ADC and the I2C slave is the PWM LED (it was the opposite in the previous one). Anyway, that doesn't really matter; you can change that in the source code of the stm32f103, which is also available in the repo, and you also need to build that and flash it on the MCU. Pretty much, if you've read the previous README file, it's the same thing.


I've performed the benchmarks with the stm32f103 running at 72MHz and at 128MHz, too; but there wasn't any difference at all really, as I've limited the SPI bandwidth to 30MHz. The reason for that was actually the cabling, which was causing a lot of errors above that frequency, and it seems that the problem was the nanopi neo and not the stm32f103. Still, the results are interesting and I was able to get valuable information.

I've performed various tests. First with the two different kernels, SMP and PREEMPT-RT. Then for each kernel I've tested a range of SPI frequencies (500KHz, 1MHz, 2MHz, 5MHz, 10MHz, 20MHz, 30MHz). The provided tool actually does that automatically. Then, for all the above cases, I've tested the kernel with no load, then with a light load and then with a heavy load. The light load was, guess what? printf, of course. Well, printf might sound silly, but in a while loop it does the trick, because the uart buffer fills up pretty quickly and then the kernel has to flush the buffer and send the data. For the heavy load I've just used the Linux stress tool. I've also included a calc file in the repo with my results.

So let's get to the fun stuff. No. Before that, I also want to say that there were two kinds of benchmarks. The first one was a pin on the stm32f103 which was toggling every time a full read/write cycle was performed. That means the Linux app was reading the ADC value of the photoresistor from the stm32f103, and then writing that value to the I2C PWM LED on the stm32f103. Every time this cycle completes, a pin on the stm32f103 toggles state. Therefore, by measuring the time of a single pulse you actually get the time of the cycle.

Before proceeding, these are the kernel versions for the SMP and PREEMPT-RT kernels:

Linux nanopi-neo 4.14.87-allwinner #1 SMP Wed Dec 26 15:26:48 UTC 2018 armv7l GNU/Linux
Linux nanopi-neo 4.14.78-rt47-allwinner #1 SMP PREEMPT RT Mon Jan 21 20:12:29 UTC 2019 armv7l GNU/Linux
SMP kernel

Let’s see the first three images. These are some oscilloscope probings with the SMP kernel and the SPI frequency at 500KHz, which is a very low frequency.

The first image is a zoom-in on the stm32f103's toggle pin. As I've said, two toggles (or a pulse, if you prefer) are a full read/write cycle, which means that in this time a 16-bit word is transferred on the SPI and 3 bytes on the I2C. Because the I2C is much slower (100KHz), it has a strong effect on the speed compared to the SPI, but we're trying to emulate a real-life scenario here. In this case, the average cycle time is 475 μsecs (you see that the average is calculated for both the low and the high pulse).

The second and third screenshots display the toggle pin output when we're running the light load with the printf. Wow! What's going on there? You see there are large gaps between the read/write cycles. Why? Well, that's because of printf and the UART interface. UART is the dinosaur of the comm peripherals and it's sloooow. In this case it only has a 115200 bps baudrate. And why are there gaps, you'll ask. There are gaps because the printf is inside a while loop, which means that it fills the kernel UART buffer very fast, and at some point the kernel needs to flush this buffer in order to create more space for the next bytes. During the flush the CPU is occupied with the UART peripheral, which doesn't support DMA in this case, and there you have it… Huge gaps where the kernel flushes the UART buffer. We can extract valuable information from this, though. The middle picture shows that the kernel spends 328 ms to empty the UART buffer (see the BX-AX value in the top left corner). During this time you get a gap. Then, in the last picture, you see that for the next 504 ms the kernel performs read/writes on the I2C/SPI. This behavior is with the default kernel affinity and scheduler priority per process.

Now let’s see the same output when the SPI is set to 30MHz which for the current setup seems to be the maximum frequency without getting errors.

In the first picture we now see that a full SPI/I2C read/write cycle takes 335 μsecs, which is much faster compared to the 500KHz SPI speed. You can also see that the printf time is 550 ms and the SPI/I2C read/write time is 218 ms, which means that the kernel uses the CPU for almost the same amount of time to empty the printf buffer, but the SPI/I2C transactions use the CPU for almost half the time. It seems that the kernel CPU time is tied to the SPI/I2C statistics.

Now let's use the user-space tool to get some different numbers. In this case I'll run the benchmark mode of the tool, which counts the SPI/I2C read/write cycles per second. Each second is an iteration; therefore, the tool also takes a parameter for how many iterations it will run. For example, the following command means that the tool will use /dev/i2c-0 and /dev/spidev0.0 in benchmark mode (-m 1), for 20 iterations/runs/seconds (-r 20), and with the printf (light load) disabled (-p 0).

./linux-app -i /dev/i2c-0 -s /dev/spidev0.0 -m 1 -r 20 -p 0

After the test runs, it will print some results, for example:

        SPI speed: 1000000 Hz (1000 KHz)
        SPI speed: 2000000 Hz (2000 KHz)
        SPI speed: 5000000 Hz (5000 KHz)
        SPI speed: 10000000 Hz (10000 KHz)
        SPI speed: 20000000 Hz (20000 KHz)
        SPI speed: 30000000 Hz (30000 KHz)

There you see that for each SPI speed the number of SPI/I2C read/write cycles is counted and printed. I won't paste other data here; I'll use only the average values instead. You can have a look at the second sheet in the calc ods file for all the data.

So let’s see the average values when we use the benchmark mode, for 20 runs and the printf on and off.

SMP -m 1 -r 20 -p 0 -m 1 -r 20 -p 1
1MHz 2750.75 1561.55
2MHz 2843.75 1499.45
5MHz 2938.78 1427.05
10MHz 2936.2 1450.65
20MHz 2987 1902.6
30MHz 2986.6 1902.65

From the above table I draw the following conclusions: on the SMP kernel there are almost twice as many SPI/I2C read/write cycles with the printf disabled than with it enabled. Wow, nobody expected that… Also, when there's no printf, above 5MHz there's not much difference in the number of cycles, but there is a difference when the printf is enabled, especially at 20MHz and above. Anyway, as expected, the faster the clock, the higher the cycle count.

But let’s now also enable a quite heavy load in the background and re-run those tests to see what happens. The full command I’ve used for the stress tools is:

stress --cpu 4 --io 4 --vm 2 --vm-bytes 128M

The above command means that the stress tool will spawn 4 workers spinning on the CPU, 4 workers spinning on I/O, and two extra workers spinning on a malloc/free of 128MB each, to add memory load. And these are the averages:

SMP -m 1 -r 20 -p 0 -m 1 -r 20 -p 1
1MHz 1733.7 1155.5
2MHz 1874.95 1186.9
5MHz 1760.65 1196.9
10MHz 1731.4 1154.65
20MHz 1698.7 1170.2
30MHz 1826.7 1298.75

Now, with the heavy background load, we of course see a huge drop in the performance of the SMP kernel, in both cases, with the printf either on or off. Here the frequency doesn't really have a great impact, but it still performs better. Any increase in performance that is more than the statistical error is welcome. Therefore, even those 100-200 extra full read/write cycles are better performance; it just doesn't scale uniformly like in the previous example, where there wasn't a background load.

Now let’s see the PREEMPT-RT kernel…


Let's have a look at a couple of oscilloscope probings, like we did in the case of the SMP kernel.

In this case the average time for a full SPI/I2C cycle is 476 μsecs. You can also see that the kernel performs read/write cycles for 504 ms and also spends 324 ms to flush the UART buffer. I will draw the conclusions about how the SMP and the PREEMPT-RT kernels compare in the next section, so I'm continuing with the rest of the benchmarks.

These are the two tables for the PREEMPT-RT kernel, with the benchmark results as in the previous example.

PREEMPT-RT -m 1 -r 20 -p 0  -m 1 -r 20 -p 1
1MHz 2249.7 1448.55
2MHz 2254.2 1444.65
5MHz 2273.89 1447.55
10MHz 2281.45 1457.95
20MHz 2286.9 1457.55
30MHz 2297.85 1458.7

So, this means that the light load that printf adds to the CPU has a huge effect on the performance, although this is the real-time kernel. That's expected though, because real-time doesn't mean that each process gets the same performance it would get if it were the only process running on the CPU; it just means that the scheduler will be fair and each process is guaranteed a minimum amount of time to execute frequently. Therefore, the overall performance is affected by any additional load, as in the case of the SMP.

Now let’s see what’s happening when there’s a heavy background load as before. So I’ve used the exact same parameters for the stress tool as before and these are the results I’ve got.

PREEMPT-RT -m 1 -r 20 -p 0   -m 1 -r 20 -p 1
1MHz 1815.15 1398.7
2MHz 1930.25 1443.35
5MHz 1963.55 1399.9
10MHz 1929.9 1441.5
20MHz 2045.65 1472
30MHz 2002.05 1442.65

So, what do we see here? Wow, right? There's really no big difference compared to the previous table. It seems that although the load now is much higher, the performance impact is quite low. Why's that? Well, that's what the RT kernel does: it makes sure that your process will get a fair time to run and it will preempt other processes frequently, so no process can occupy the CPU for more time than the others. Again, the printf has a great impact, because the problem lies in the implementation and there's no DMA to offload the task of sending bytes over the UART from the CPU.


So let’s compare the results that we have from the two kernels and see what we got. I’ll create two new tables with the sum of the results for the light and heavy load. This is the table without the heavy background load.

SPI clock SMP (-p 0) PREEMPT-RT (-p 0) SMP (-p 1) PREEMPT-RT (-p 1)
1MHz 2750.75 2249.7 1561.55 1448.55
2MHz 2843.75 2254.2 1499.45 1444.65
5MHz 2938.79 2273.89 1427.05 1447.55
10MHz 2939.2 2281.45 1450.65 1457.95
20MHz 2987 2286.9 1902.6 1457.55
30MHz 2986.6 2297.85 1902.65 1458.7

In this table we see that without any load, the SMP kernel is much faster compared to the RT. That's happening because the scheduler is not really fair, but gives as much processing time as possible to the SPI/I2C and the benchmark tool, as the rest of the processes are idle. Quite the same happens for the RT without the load, but there the CPU is still forced to switch between other tasks and processes that don't have much to do, so the scheduler is more "fair".

In the next two columns, the impact of the printf in the while loop has a huge effect on both kernels. Nevertheless, the SMP kernel gives more processing time to the benchmark tool and the SPI/I2C interfaces; therefore the SMP gets approx. 450 more read/write cycles at the higher frequencies.

Another thing that is obvious from the table is that on the SMP kernel the SPI/I2C read/writes scale as the frequency increases, while on the RT kernel they do not. So for the RT kernel it doesn't matter if the SPI bus is running at 1MHz or 30MHz. Cool, right? That means that if you're running on an RT kernel, you don't have to worry about optimizing your SPI to achieve the max frequency, because it doesn't make any difference. But on the SMP you should definitely do such optimizations.

So in this case, it seems that the SMP kernel is much, much better for such usage scenarios. What are those scenarios? Well, SPI displays are definitely one of them, for example. And this is most probably the same for every other peripheral that demands high throughput (e.g. PCIe, USB, etc.).

Now let’s go to the next table that the benchmark is running with a heavy load in the background.

SPI clock SMP (-p 0) PREEMPT-RT (-p 0) SMP (-p 1) PREEMPT-RT (-p 1)
1MHz 1733.7 1815.15 1155.5 1398.7
2MHz 1874.95 1930.25 1186.9 1443.35
5MHz 1760.65 1963.55 1196.9 1399.9
10MHz 1731.4 1929.9 1154.65 1441.5
20MHz 1698.7 2045.65 1170.2 1472
30MHz 1826.7 2002.05 1298.75 1441.65

Wait, what? What happened here? In all benchmarks the RT kernel not only scores higher, but if you look at the full table in the calc file, you'll also see that there is a smooth and consistent performance between each SPI/I2C read/write cycle for the RT kernel. The SMP kernel, on the other hand, has a great variation between the cycles and also a lower average performance. The performance difference between SMP and RT is not huge, but it's substantial. Who doesn't want 100, 200 or even 300 more SPI/I2C read/write cycles per second, right?

So what happened here? Well, as I've mentioned before, the RT scheduler is fair. Therefore, with the RT kernel you get almost the same performance as with a lower load, because the kernel will more or less assign the CPU for the same amount of time. But the performance of the SMP takes a great hit, because now the kernel needs to assign more time to the other processes demanding the CPU. Hence the difference between the last two tables.

OK, so what's better then? What should I use? Which is better? Well… that depends. What are your needs? What does your device do? For example, if you want to drive an SPI display with the max framerate possible, then forget about RT, but at the same time make sure that there are no other processes in your system that load the CPU that much, because then your framerate will drop even more compared to the RT kernel. Then why use the RT kernel? You would use the RT kernel if your system needs to perform specific tasks in a predictable way, even under heavy load. Examples of that are audio, or let's say driving motors, where you need minimum latency under every circumstance (no load, mid load, high load). In most cases the SMP kernel is what you need, when a lot of I/O and high throughput is needed, and in almost every other case, except when you need low latency and predictable execution.

Another thing that needs a mention here is that the RT kernel is not just a plug-n-play thing that you boot in your OS and everything works just as fine as with SMP. Instead, there may be a lot of underlying issues and bugs in there, with undefined behavior that is not triggered with the SMP kernel. This means that some drivers, subsystems, modules or interfaces, or even hardware, may not be stable with the RT kernel. Of course, the same goes for the SMP, but at least the SMP is much more widely used, so those issues come to the surface and get fixed sooner compared to the RT kernel. Also, if your kernel is not a mainline kernel, then it's a hard and long process to convert it to a fully PREEMPT-RT kernel, as the patches for the RT kernel target the mainline kernel only. So, until all the PREEMPT-RT patches become mainline and we also get to the point where your hardware supports those mainline versions, it might take a looong time.

This post is just a stupid project and is not meant to be an extensive review, benchmark or versus between the SMP and the PREEMPT-RT kernels. Don't forget where you are. This is a stupid projects blog. And for that reason, let's see the SPI photoresistor and the I2C PWM LED in action.

Have fun!


Linux and the I2C and SPI interfaces


Most of the people reading this blog, except that they don't have anything more interesting to do, are probably more familiar with the lower-level embedded stuff. I like the baremetal embedded stuff. Everything is simple and straight-forward; you work with registers, close to the hardware, and you avoid all the bloatware between the hardware and the more complicated software layers, like an RTOS. Also, the code is faster and more efficient, and you have full control of everything. When you write a firmware, you're the God.

And then an RTOS comes and says: well, I'm the god, and I may give you some resources and time to spend with my CPU to run your pitiful firmware or buggy code. Then you become a semi-god, at best. So Linux is one of those annoying OSes that demote you to a simple peasant and allow you to use only a part of the resources, and only when they decide to. On the other hand, though, it gives you back a lot more benefits, like supporting multiple architectures and having an API for both the kernel and the user-space that you can re-use among those different architectures and hardware.

So, in this stupid project we'll see how we can use a couple of hardware interfaces like I2C and SPI to establish a communication between the kernel and an external hardware. This will unveil the differences between those two worlds, and you'll see how those interfaces can be used in Linux in different ways and whether one of those ways is better than the other.

Just a note here: I've tested this project on two different boards, a raspberry pi 3 model B+ and a nano-pi neo. Using the rpi is easier for most people, but I prefer the nano-pi neo as it's smaller, much cheaper and it has everything you need. Therefore, in this post I will explain (not very thoroughly) how to make it work on the rpi, but in the README.md file in the project repo you'll find how to use Yocto to build a custom image for the nano-pi and do the same thing. In the case of the nano-pi you can also use a distro like armbian and build the modules by using the sources of the armbian build. There are so many ways to do this, so I'll only focus on one way here.



I tried to keep everything simple and cheap. For the Linux OS I've chosen the nanopi-neo board. This board costs around ~$12 and it has an Allwinner H3 CPU @800MHz or @1200MHz and 512MB RAM. It also has various other interfaces, but we only care about the I2C and the SPI. This is the board:

You can see the full specs and pinout description here. You will find the guide how to use the nano-pi neo in the repo README.md file in here:


Raspberry pi

I have several raspberry pis lying around here, but in this case I'll use the latest Raspberry Pi 3 Model B+. That way I can justify to myself that I bought it for a reason and feel a bit better. In this guide I will explain how to make this stupid project with the rpi, as most people have access to one of these rather than a nano-pi.

Arduino nano

Next piece of hardware is the arduino-nano. Why arduino? Well, it's fast and easy, that's why. I think the arduino is both a blessing and a curse. If you have worked a lot with the baremetal stuff, arduino is like a miracle. You just write a few lines of code and everything works. On the other hand, it's also a trap. If you write a lot of code in there, you end up losing touch with the baremetal reality and you become more and more lazy and forget about the real hardware. Anyway, because there's not much time now, the Arduino API will do just fine! This is the Arduino nano:

Other stuff

You will also need a photoresistor, a LED and a couple of resistors. I’ll explain why in the next section. The photoresistor I’m using is a pre-historic component I’ve found in my inventory and the part name is VT33A603/2, but you can use whatever you have, it doesn’t really matter. Also, I’m using an orange led with a forward voltage of around 2.4V @ 70mA.


OK, that's fine. But what's the stupid project this time? I've thought of the most stupid thing you can build and it would be a nice addition to my series of stupid projects. Let's take a photo-resistor and the Arduino nano and use an ADC to read the resistance and then use the I2C interface as a slave to send the raw resistance value. This value is actually a measure of the light energy that the resistor senses, and therefore if you have the datasheets you can convert this raw resistance to something more meaningful like lumens or whatever. But we don't really care about that; we only care about the raw value, which will be something between 0 and 1023, as the avr mega328p (arduino nano) has a 10-bit ADC. Beautiful.

So, what do we do with this photo-resistor? Well, we also use a PWM channel of the Arduino nano to drive a LED! The duty cycle of the PWM will control the LED brightness and we feed the mega328p with that value by using the SPI bus, so the Arduino will also be an SPI slave. The SPI word length will be 16 bits, of which only 10 bits will be effective (same length as the ADC).

Yes, you've guessed right. From the Linux OS, we will read the raw photo-resistor value using the I2C interface and then feed this value back to the PWM LED using the SPI interface. Therefore, the LED will be brighter when we have more light and less bright in dark conditions. It's like the auto-brightness of your mobile phone's screen. Stupid, right? Useless, but let's see how to do that.


First you need to connect the photo-resistor and the LED to the Arduino nano as it’s shown in the next schematic.

As you can see the D3 pin of the Arduino nano will be the PWM output that drives the LED and the A3 pin is the ADC input. In my case I’ve used a 75Ω resistor to drive the LED to increase the brightness range, but you might have to use a higher value. It’s better to use a LED that can handle a current near the maximum current output of the Arduino nano. The same goes for the resistor that creates a voltage divider with the photo-resistor; use a value that is correct for your components.

Next you need to connect the I2C and SPI pins between the Arduino and the rpi (or nanopi-neo). Have in mind that the nano-pi neo has an rpi compatible pinout, so it's the same pins. These are the connections:

Signal  Arduino  RPI (and nano-pi neo)
/SS     D10      24 (SPI0_CS)
MOSI    D11      19 (SPI0_MOSI)
SCK     D13      23 (SPI0_CLK)
SDA     A4       3 (I2C0_SDA)
SCL     A5       5 (I2C0_SCL)

You will need to use two pull-up resistors for the SDA and SCL. I've used 10kΩ resistors, but you may have to check with an oscilloscope to choose the proper values.


You need to flash the Arduino with the proper firmware and also boot up the nanopi-neo with a proper Linux distribution. For the second part you have two options, which I'll explain later. So, clone this repo from here:


There you will find the Arduino sketch in the arduino-i2c-spi folder. Use the Arduino IDE to build and upload the firmware.

For the rpi, download the standard raspbian stretch lite image from here and flash an SD card with it. Don't use the desktop image, because you won't need any gui for this guide and the lite image is faster. After you flash the image and boot the board there are a few debug tweaks you can do, like removing the root password and allowing passwordless root connections. Yeah, I know it's against the safety guidelines, don't do this at your home and blah blah, but who cares? You're not supposed to run your NAS with your dirty secrets on this thing, it's only for testing this stupid project.

To remove the root password run this:

passwd -d root

Then, just for fun, also edit your /etc/passwd file and remove the x from the root:x:... line. Then edit your /etc/ssh/sshd_config so the following lines are like this:

PermitRootLogin yes
#PubkeyAuthentication yes
PasswordAuthentication yes
PermitEmptyPasswords yes
UsePAM no

Now you should be able to ssh easily to the board like this:

ssh root@

Anyway, just flash the arduino firmware and the raspbian distro and then do all the proper connections and boot up everything.


Here's the interesting part. How can you retrieve and send data from the rpi (or nanopi-neo) to the Arduino board? Of course, by using the I2C and the SPI, but how can you use those interfaces inside Linux? Well, there are many ways and we'll see a few of them.

Before you boot the raspbian image on the rpi3, you need to edit the /boot/config.txt and add this at the end of the file, in order to enable the uart console.

Raw access from user space using bash

Bash is cool (or any other shell). You can use it to read and write raw data from almost any hardware interface. In the bitbucket repo, have a look at bash-spidev-example.sh. That's a simple bash script that is able to read data from the I2C and then send data to the SPI using the spidev module. The only thing you need to take care of is to load the spidev overlay and install the spi-tools. The problem with the debian stretch repos is that spi-tools is not in the available packages, so you need to build it yourself. To do this, just login as root and run the following commands on the board's shell:

apt-get install -y git
apt-get install -y autotools-dev
apt-get install -y dh-autoreconf
apt-get install -y i2c-tools
git clone https://github.com/cpb-/spi-tools.git
cd spi-tools
autoreconf -fim
./configure
make
make install

Now that you have all the needed tools installed, you need to enable the i2c0 and the spidev modules on the raspberry pi. To do that, run the raspi-config tool, browse to the Interfacing options, enable both I2C and SPI and then reboot. After that you will be able to see that there are these devices:



This means that you can use one of the bash scripts I've provided in the repo to read the photo-resistor value and also send a pwm value to the LED. First you need to copy the scripts from your workstation to the rpi, using scp (I assume that the IP is

cd bash-scripts
scp *.sh root@

Before you run the script you need to properly set up the SPI interface, using the spi-config tool that you installed earlier, otherwise the default speed is too high. To get the current SPI settings run this command:

spi-config -d /dev/spidev0.0 -q

If you get the following output (the 10MHz default speed), then you need to configure the SPI:

/dev/spidev0.0: mode=0, lsb=0, bits=8, speed=10000000, spiready=0

To configure the SPI, run this command:

spi-config -d /dev/spidev0.0 -m 0 -l 0 -b 8 -s 1000000

With this command you configure the SPI to be consistent with the Arduino SPI bus configuration. Then you run the script. If you look at the script, the sleep function sleeps for 0.05 secs, or 50ms. We do that for benchmarking. I've used the oscilloscope to measure the time between SPI packets and the average is around 66ms (screenshots later on), instead of 50ms. Of course that includes the time to read from the I2C and also send to the SPI. Also, I've seen a few I2C failures with the following error:

mPTHD: 0x036e
mError: Read failed
PTHD: 0x
n\PTHD: 0x036d
xPTHD: 0x036e

Anyway, this way we've seen that we are able to read from the I2C and write to the SPI without having to write any custom drivers. Instead we used the spidev module, which is available in the mainline kernel, plus a couple of user-space tools. Cool!

Using a custom user-space utility in C

Now we're going to write a small C utility that opens the /dev/i2c-1 and the /dev/spidev0.0 devices and reads/writes to them like they are files. For that you need to compile a small tool. You could do that on the rpi, but we'll need to build some kernel modules later, so let's use a cross-toolchain for that.

The toolchain I've used is the `gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf` and you can download it from here. You may have noticed that this is a 32-bit toolchain. Yep, although the rpi is a 64-bit cpu, raspbian is 32-bit by default. Just extract it to a folder (this folder in my case is /opt/toolchains) and then you can cross-build the tool on your desktop with these commands:

export CC=/opt/toolchains/gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc
${CC} -o linux-app linux-app.c
scp linux-app root@

Then on the rpi terminal run this:

./linux-app /dev/i2c-1 /dev/spidev0.0

Again in the code I've used a sleep of 50ms, and this time the oscilloscope shows that the average time between SPI packets is ~50ms (screenshots later on), which is a lot faster compared to the bash script. In the picture the average shown is 83ms, but that's because sometimes the OS introduces delays of 200+ms, which is quite expected on a non PREEMPT-RT kernel. Also, I've noticed that there were no missing I2C packets with the executable. Nice!

You won’t need the spidev anymore, so you can run the raspi-config and disable the SPI, but leave the I2C as it is. Also, in the /boot/config.txt make sure that you have those:


Now reboot.

Use custom kernel driver modules

Now the more interesting stuff. Let's build some drivers and use the device-tree to load the modules and see how the kernel really handles these types of devices, using the IIO and the LED subsystems. First let's build the IIO module. For that you need to set up the rpi kernel sources and the cross-toolchain on your workstation, so get the kernel from git and run some commands to prepare it.

git clone https://github.com/raspberrypi/linux.git
cd linux

Now you need to checkout the correct hash/tag. To find it, run this command on the rpi console:

uname -a

In my case I get this:

Linux raspberrypi 4.14.79-v7+ #1159 SMP Sun Nov 4 17:50:20 GMT 2018 armv7l GNU/Linux

That means that the date this kernel was built was 04.11.2018. Then in the kernel repo on your workstation, run this:

git tag

And you will get a list of tags with dates. In my case the `raspberrypi-kernel_1.20181112-1` tag seems to be the correct one, so check out the one that is appropriate for your kernel, e.g.

git checkout raspberrypi-kernel_1.20181112-1

Then run these commands:

export KERNEL=kernel7
export ARCH=arm
export CROSS_COMPILE=/opt/toolchains/gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-
export KERNEL_SRC=/rnd2/linux-dev/rpi-linux
make bcm2709_defconfig
make -j32 zImage modules dtbs

This will build the kernel, the modules and the device-tree files, and it will create the Module.symvers that is needed to build our custom modules. Now in the same console (where the above parameters are exported) run this from the repo top level:

cd kernel_iio_driver
dtc -I dts -O dtb -o rpi-i2c-ard101ph.dtbo rpi-i2c-ard101ph.dts
scp ard101ph.ko root@
scp rpi-i2c-ard101ph.dtbo root@

Now, do the same thing for the led driver:

cd kernel_led_driver
dtc -I dts -O dtb -o rpi-spi-ardled.dtbo rpi-spi-ardled.dts
scp ardled.ko root@
scp rpi-spi-ardled.dtbo root@

And then run these commands to the rpi terminal:

mv ard101ph.ko /lib/modules/$(uname -r)/kernel/drivers/iio/light/
mv ardled.ko /lib/modules/$(uname -r)/kernel/drivers/leds/
depmod -a

And finally, edit the /boot/config.txt file and make sure that those lines are in there:


And now reboot with this command:

systemctl reboot

After the reboot (and if everything went alright) you should be able to see the two new devices and also be able to read and write data like this:

cat /sys/bus/iio/devices/iio\:device0/in_illuminance_raw
echo 520 > /sys/bus/spi/devices/spi0.0/leds/ardled-0/brightness

Have in mind, that this is the 4.14.y kernel version and if you’re reading this and you have a newer version then a lot of things might be different.

So, now that we have our two awesome devices, we can re-write the bash script in order to use those kernel devices now. Well, the script is already in the bash-scripts folder so just scp the scripts and run this on the rpi:


The oscilloscope now shows an average period of 62.5ms, which is a bit faster compared to the raw read/write from bash in the first example, but the difference is too small to be significant.


Let’s see some pictures of the various oscilloscope screenshots. The first one is from the bash script and the spidev module:


The second is from the linux-app program that also used spidev and the /dev/i2c.

And the last one is by using the kernel’s iio and led subsystems.

So let's make some conclusions about the speed in those different cases. It's pretty clear that writing an app in user space using spidev and /dev/i2c is definitely the way to go, as it's the best option in terms of speed and robustness. The difference between using a bash script to read/write to the bus with the two different types of drivers [spidev & /dev/i2c] vs [leds & iio] is very small.

Then why write a driver for iio and leds in the first place, if there's no difference in performance? Exactly. In most cases it's much easier to write a user-space tool to control these kinds of devices instead of writing a driver.

Then are those subsystems useless? Well, not really. They are useful if you use them right.

Let’s see a few bullet points, why writing a user-space app using standard modules like the spidev is good:

  • No need to know the kernel internals
  • Independent from the kernel version
  • You just compile on another platform without having to deal with hardware specific stuff.
  • Portability (pretty much a generalization of the above)
  • If the user space app crashes, the kernel remains intact
  • Easier to update the app than updating a kernel module
  • Less complicated compared to kernel drivers

On the other hand, writing or having a subsystem driver also has some nice points:

  • There are already a lot of kernel modules for a variety of devices
  • You can write a user space software which is independent from the hardware. For example if the app accesses an iio device then it doesn’t have to know how to handle the device and you can just change the device at any time (as long it’s compatible in iio terms)
  • You can hide the implementation if the driver is not a module but fixed in the kernel
  • It’s better for time critical operations with tight timings or interrupts
  • Might be a bit faster compared to a generic driver (e.g. spidev)
  • Keeps the hardware isolated from the user-space (that means that the user-space devs don’t have to know how to deal with the hardware)

These are pretty much the generic points that you may read about elsewhere. As a personal preference, I would definitely go for a user-space implementation in most cases, even if a device requires interrupts, as long as it's not time critical. I would choose to write a driver only for very time-critical systems. I mean, OK, knowing how to write kernel drivers is a nice skill to have today and it's not difficult. The bottom line is, I believe that most of the time, even in the embedded domain, when it comes to I2C and SPI you don't have to write a kernel driver, unless we're talking about ultra fast SPI, more than 50MHz with DMAs and stuff like that. There are very few cases that really need that, like audio/video or a lot of data. In 95% of the rest of the cases spidev and user-space are fine. The spidev module is even used for driving framebuffers, so it's proven already. If you work in the embedded industry, then probably you know how to do both and choose the proper solution every time; but most of the time on a mainstream product you may choose to go with a driver because it's “proper” rather than “needed”.

Anyway, in this stupid project you've pretty much seen how SPI and I2C devices are used, how to implement your own I2C and SPI device using an Arduino, and then how to interface with it from the Linux kernel, either by using the standard available drivers (like spidev) or by writing your own subsystem driver.

Finally, this is a video where both kernel modules are loaded and the bash script reads the photo-resistor value every 50ms via I2C and then writes the value to the brightness value of the led device.

Have fun!

STM32F373CxT6 development board


Probably most of you people browsing places like this already know the famous “blue pill” board that is based on the stm32f103c8t6 mcu. Actually, you can find a few projects here that are based on this mcu. It's my favorite one: fast, nice peripherals, overclockable and dirt cheap. The whole PCB dev board costs around 1.7 EUR… I mean, look at this.

If you tried to buy the components to build one and also order the PCBs from one of the known cheap PCB makers, it would cost more than 10 EUR per unit for a small quantity. But…

There’s a new player in town

Don't get me wrong, the stm32f103 is still an excellent mcu and capable of doing most of the things I can imagine for small stupid projects. But there are also a few things that it can't do. And this is the gap that this new mcu comes to fill. I'm talking about the stm32f373, of course. The STM32F373CxT6 (I'll refer to it as f373 from now on) is pin-to-pin compatible with the STM32F103C8T6, although some pins are different, and it's like its buffed brother. The new f373 also comes in a LQFP48 package, but there are significant differences, which I've summed up in the following table.

                                  STM32F373C8    STM32F103C8
Core                              Arm Cortex-M4  Arm Cortex-M3
RAM Size (kB)                     16             20
Timers (typ) (16 bit)             12             4
Timers (typ) (32 bit)             2              -
A/D Converters (12-bit channels)  9              10
A/D Converters (16-bit channels)  8              -
D/A Converters (typ) (12 bit)     3              -
Comparators                       2              -
SPI (typ)                         3              2
I2S (typ)                         3              -

So, you get a faster core, with DSP, FPU and MPU, DACs, 16-bit ADCs, I2S and more SPI channels, in the same package… This MCU is mind-blowing. I love it. But, of course, it's more expensive. Mostly I like the DACs, because you can do a lot of stupid stuff with them, and the 16-bit ADCs, because more bits means less noise, especially if you use proper software filters. The core frequency is the same, so I don't expect much difference in terms of raw speed, but I'll test it at some point.

Also it's great that ST has already released a standard peripheral library for this mcu, so you don't have to use that HAL crap bloatware. There's a link for it here. The StdPeriphLib support is what makes this part my favorite one, except for the fact that… I haven't tested it yet.

Where’s my pill?

For the f103 we have the blue pill. But where's my f373 pill? There's nothing out there yet. Only a very expensive dev board that costs around $250. See here. That's almost 140 “blue pills”… Damn. Therefore, I had to design a board that is similar to the “blue pill” but uses the f373. Well, some of you have already thought: why not remove the f103 from the “blue pill” and solder the f373 instead, and there you are, you have the board. Well, that's true… You can do that, some pins might need some special handling of course, but where's the fun in that? We need to build a board and do a stupid project!

Well, in case you want to replace the f103 on the “blue pill” with the f373, this is the list with the differences in the pinout.

PIN F103 F373
21 B10 E8
22 B11 E9
25 B12 VREFSD+
26 B13 B14
27 B14 B15
28 B15 D8
35 VSS_2 F7
36 VDD_2 F6

So the problematic pins are mainly 35 & 36, because on the “blue pill” board pin 35 is connected straight to ground and pin 36 to 3V3, which makes the f373's F7 and F6 unusable there. According to the manual the F7 pin has these functionalities: I2C2_SDA, USART2_CK; and the F6 these: SPI1_MOSI/I2S1_SD, USART3_RTS, TIM4_CH4, I2C2_SCL. That doesn't mean that you can't use the I2C2, as the I2C2_SDA is also available on pin 31 (PA10), the SPI1_MOSI on pin 41 (PB5) and the I2C2_SCL on pin 30 (PA9). So not much harm, except for the fact that those alternative pins overlap other functionalities that you might then not be able to use.

Therefore, just swap the f103 on the “blue pill” with the f373, if you like; but don't fool yourself, this won't be stupid enough, so it's better to build a board from scratch.

The board

I've copied the schematics of the “blue pill” and created a similar board, with a few differences though. First, the components like resistors and capacitors are 0805 with larger pads, instead of the 0603 of the original “blue pill”. The reason for that is to help older people like me, with bad vision and shaky hands, to solder these little things on the board. I'm planning to create a branch with 0603 parts for the younger people, though. Soon…

I've used my favorite KiCAD for this, as it's free, opensource and every open-hardware (and not only) project should use it, too. I think that after version 5, more and more people are finally joining the boat.

This is the repo with the KiCAD files:


And this is the 3D render of the board:


You can replace the f103 on the “blue pill” with the f373, but that's lame. Don't do that. Always go with the hardest, most time consuming and most expensive solution and people will always love you. I don't know why, it just works. So, instead of replacing the IC, build a much more expensive board from scratch, order the PCBs and the components, wait for the shipment and then solder everything by yourself. Then you'll have a great stupid project and I'll be proud of you.

Have fun!

Electronic load using ESP8266


Long time, no see. Well, let me tell you a thing, parenting is the end of your free time for your stupid projects. It took me a few months just to complete a very small stupid project like this… Anyway, this time I wanted to build an electronic load. There are so many designs out there with various different user interfaces, but I wanted something more useless than a pot to control the output current. Therefore, I thought, why not build an electronic load with a web interface? And which is the best option for that? Of course, the esp8266. Well… not really, but we'll see why later. Still, this is built on the esp8266 anyway.

Electronic load circuit

There are various ways to make an eload, but the easiest is to use an N-MOSFET and an opamp whose negative feedback loop drives the gate of the MOSFET. This is a screenshot of the main circuit:

You can find the kicad project for the eload circuit here:


In the above circuit there's a first stage opamp that is driven by a DAC. This opamp amplifies the signal with a gain of 2.2 and then a second stage opamp drives the MOSFET. The drain of the MOSFET is connected to the PSU under test and the source to an array of parallel 10Ω/1W resistors, which makes an effective 1Ω/10W resistor. The gate and source of the MOSFET are part of the negative feedback loop of the opamp. This means the opamp will do what opamps do and will “mirror” the voltage on the (+) input to the (-) input; whatever voltage is applied on the (+) input, the opamp will drive its output so that both inputs sit at the same voltage. Because the gate and the source of the MOSFET are part of the feedback loop, the voltage on the source will be the same as on the (+) input. Therefore, with a load resistance of 1Ω the current will be I=V/R, and because R=1Ω that means I=V. So, if 5V are applied on the (+) input then 5A will flow through the resistors.

It's generally better to have resistors in parallel, because the current is split between them, which means less heat per resistor and no need for extra cooling on a single resistor.

There is also another opamp which buffers the voltage on the negative feedback loop and drives a voltage divider and an RC filter. We need a buffer there in order not to load the feedback, as the opamp draws only a tiny input current which doesn't affect the performance of the circuit. The voltage divider is a 1/10 divider that scales the maximum 10V input down to the max 1V allowed on the ADC input of the esp8266. Then there is a first order RC filter with a pole at ~1Hz to filter out any noise, as we only care about the DC offset.

As the opamp is powered from +10V, if it's a rail-to-rail opamp we could have as much as 10A on the load. The circuit is designed for a bit less, though. But, let's see the components.


For this project I've decided to build a custom PCB instead of using a prototype breadboard and connecting the different components with solder. The reason for that was “low noise” for the ADCs and DACs. Well, that didn't work out eventually, but still it's a nice board. The main components I've used are:


This is the main component: the esp8266 module with 4MB flash and the esp8266 core which can run up to 160MHz. It has two SPI interfaces, one used for the onboard flash and one free to use. Also it has a 10-bit ADC channel which is limited to max 1V input signals. This is a great limitation and we'll see later why. You can find this on ebay sold for ~1.5 EUR, which is dirt cheap.


As you see, the vendor is AI-THINKER and the model is ESP8266MOD, and most of the time this is what you'll find on ebay, but you may also find modules with no markings.


This is an IC with 4x low noise, single supply, rail-to-rail output opamps and a voltage range up to 16V, which is more than enough for our purpose. You can find the specs here. It costs around 0.85 EUR per unit at Arrow or other electronic component distributors. I wouldn't buy these ICs from ebay.


This is an N-channel logic level power MOSFET with a very low Rds(on) (0.018Ω @ 10V). We need a logic level MOSFET because the Vgs needs to be as low as possible. You can find all the specs here. Of course you can use any other MOSFET you like, as long as it has the same footprint. It costs approx 1 EUR, but you can use a cheaper one.


This is a low noise 12-bit DAC with an internal VREF of 2.048V, a 2x gain capability and an SPI interface. This thing is a beast. I love it. Easy to use and excellent precision for this kind of project. It costs ~1.8 EUR. Well, yeeeeeah, it's a bit expensive, I know. We'll get there soon.


This is a USB to serial IC that also supports the extra hardware signals like DTR and RTS. These are needed to simplify the flashing procedure of the esp8266, so you don't have to use jumpers to put the chip into programming mode. You can see the trick used in the schematics; you just need two transistors and two resistors. It costs ~4 EUR… Another expensive component. Not good, not good. There are also cheaper ones like the CP2102, but they are rare to find because they are very popular with the Chinese vendors that build those cheap USB to UART boards.


For the power supply we need a couple of things. Because the board is USB powered, we get the 5V for free from the USB. But we also need 3.3V and 10V. The first one is easy; we just need a voltage regulator like the AMS1117-3.3 (or any other variation of the 1117 you can find). For the +10V that is needed to drive the opamps it's a bit more complicated, and for this purpose you need a charge pump IC like the SP6661. This IC can be connected in a way that doubles its input voltage; therefore if we use the +5V from the USB we can get +10V. The cost of the SP6661 is ~1.5 EUR.

Project source code

The source code is really simple, because I've used the arduino libs for the esp8266, so I didn't have to write much code. Although the arduino libs are nice for prototyping and quick stuff, I don't like using them and I prefer to write everything baremetal. Actually, that is how I started the project, but at some point I figured out that it needed quite a lot of time for a stupid project. Therefore, I decided to go with the arduino libs.

Originally, I started with cnlohr's esp82xx, which, although it's an excellent template to start with and it has quite a few nice things in there, like a custom small file system and websockets, at the same time is a bit tied to its default web interface, and it's quite an effort to strip out all the unused stuff and add your own things. Therefore, after already spending some time with this template and building on top of it, I decided to go with the arduino libs for the esp8266, because in that case I just had to write literally a few lines of code.

You can find the source code here:

Also the kicad files are in there, so you can edit/order your own board. Read the README.md file for more info how to build the binary and upload the web server files.

Web interface and functionality

Before I built the PCB I tested most of the functionality on a breadboard, just to be sure that the main stuff works.

Noise, noise, noise

Well, although the esp8266 has an ADC, there are two problems. First, its maximum input voltage is 1V, which makes it prone to noise from external interference. Second, it's crap. When the esp8266 is using its WiFi radio to transmit, there's a lot of noise in the ADC; so much noise that it makes the ADC readings completely unusable, even with external analog filters or software low pass or IIR filters. The error can be as much as 4 bits and that's total crap, especially at low ADC values. In the following image you can see that when the DAC value is 0, the ADC input is this crap signal.

That's more than 200mV of noise. But if I disconnect the RC filter output from the ADC and just probe it, then I get this output.

The noise drops to 78mV, which is still high, but much better than before. So you see, there’s a lot of noise in the system, and especially when the RC filter output is connected to the ADC; which means that the esp8266 itself creates a lot of the noise. That’s really bad.
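Since I mentioned software low-pass/IIR filtering, this is a minimal sketch of the kind of filter I mean: a fixed-point exponential moving average. The alpha value (1/16) and the names are my own picks for illustration, not the exact code in the repo.

```c
/* Fixed-point exponential moving average (a first-order IIR low-pass).
 * The state holds the filtered value scaled by 16, so the update
 * y += (x - y)/16 is done without floating point. */
int ema_filter(int sample)
{
    static int state = -1;                  /* -1 = not yet initialized */

    if (state < 0)
        state = sample << 4;                /* seed with the first sample */
    else
        state += sample - (state >> 4);     /* 16y' = 16y + x - y */

    return state >> 4;                      /* de-scale to get the output */
}
```

Such a filter smooths the jitter nicely, but as noted above, with 4 bits of error during WiFi bursts it can’t really save the readings.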

Anyway, the components are already expensive and the price is getting out of scope for a stupid project, therefore I prefer having this crap ADC input instead of spending more for a low noise SPI 12-bit ADC. At the end of the article I’ll tell you what the best option is, and also one of my next stupid projects.

Web interface

That’s what the whole project was made for. The web interface… I mean, ok, all the rest could be made much cheaper and with better results in terms of noise, but let’s proceed with this. This is the web interface that is stored in the esp8266’s upper 1M flash area.

The upper half of the screen is the ADC read value (converted to volts) and the lower half is the output value of the DAC (also converted to volts). The DAC value is very accurate and I’ve verified it with my Fluke 87V over the whole range. I’ve also added a libreoffice calc sheet with the measured values. In real time the ADC value on the web interface changes constantly, even with the RC filter and a software low-pass filter.

The web interface uses websockets in the background. There are many benefits in using websockets and you can find a lot of references on the internet. In the days before websockets, for similar projects, I had to use ajax queries, which are more difficult to handle and also can’t achieve the high speed comms of websockets. There are a couple of excellent videos about websockets and the esp8266, like this one here.

To control the current that will flow through the load resistors, you just need to drag the slider and select the value that you want. Actually, you should only trust the slider value and use the ADC reading as an approximation, just to verify that the Vgs is close to the wanted value.

This is the setup on my workbench that I’ve used to build the circuit and run the above tests.

You may notice that I’ve soldered a pin header on the ADC pin of the esp8266 board. That’s because the Lolin esp8266 board has a voltage divider on the ADC pin, in order to protect the input from the 3V3 voltage you may apply, as that’s the reference voltage for the board.

PCB board

As I’ve mentioned, I’ve created a PCB for this and ordered it from seeedstudio, which at the time had the cheapest price. So 5 PCBs, including shipping, cost me 30 EUR. This is a 3D image from kicad; the real one is black. I’ll post it when I get it and assemble it.

As you can see, I’ve exported the same pinout as the Lolin board to make it usable for other tasks or tests, too. The heatsink will be much larger, as 9A is quite a lot, but I couldn’t find a larger 3D model (I think there’s a scale factor for the 3D model in kicad when you assign the 3D object to the footprint, but anyway).

The maximum current can be set up to 4.095 * 2.2 = ~9A. That’s the 2.048V VREF of the DAC, multiplied by 2 with the DAC’s gain and multiplied by 2.2 from the gain of the non-inverting opamp stage. Beware that if you intend to use such a high current, then you need a very good cooling solution. One way to limit the current, in order not to make a mistake, is to change the code to use the 1x DAC gain and change also the values in the eload.js javascript file in the web interface. That change will limit the current to 4.5A.
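For reference, the math above can be sanity-checked with a few lines of C. The volts-to-amps 1:1 mapping is what the 4.095 * 2.2 = ~9A figure implies, so treat that (and the function name) as my assumption, not code from the project.

```c
/* Convert a 12-bit DAC code to the target load current, per the chain
 * described above: 2.048V VREF, the DAC's selectable 1x/2x gain, the 2.2x
 * non-inverting opamp stage, and 1A per volt at the opamp output (assumed). */
double dac_code_to_amps(unsigned code, double dac_gain)
{
    double v_dac = (code / 4096.0) * 2.048 * dac_gain; /* DAC output voltage */
    double v_set = v_dac * 2.2;                        /* after the opamp stage */
    return v_set;                                      /* amps, assuming 1A/V */
}
```

With the 2x gain, the full-scale code 4095 gives ~9.01A, and with the 1x gain ~4.5A, matching the numbers above.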


Well, this was a really stupid project. It’s actually a failure in terms of cost and noise. It would be much better to use a microcontroller and replace most of the parts in there. For example, instead of using a DAC, a USB-to-serial IC or an external ADC for less noise, you could use a microcontroller that has all those things integrated and also costs much less. A nice mcu for this would be the STM32F373CBT6, which has two 16-bit ADCs, two 16-bit DACs, supports USB device connection and has other peripherals to use, e.g. for an SPI display. This controller costs only around 5 EUR and could replace a few parts. Also, you could implement a DFU bootloader to update both firmwares (STM32 and esp8266). Therefore, this might be a future stupid project upgrade. Actually, I’m trying to find an excuse to use the STM32F373CBT6 and build something around it…

Finally, the prototype worked quite nicely for me and I’ll wait for the board to arrive, test it with my PSU and see how accurate it is. I can’t see any reason why anyone should build this thing, but that’s what stupid projects are all about, right?

Have fun!

Adding armbian supported boards to meta-sunxi Yocto (updated)


Yocto is the necessary evil. Well, this post is quite different from the others, because it’s more related to stuff I do for a living, rather than fun. Still, sometimes that can be fun, too; especially if you do something other than supporting new BSPs. This post is not much about Yocto, though. If you don’t know what Yocto is, then you need to find another resource, as this is not a tutorial. On the other hand, if you’re here because the title made you smile, then read on.

You probably already know about the allwinner meta layer, meta-sunxi. Although sunxi is great and they’ve done a great job, the supported boards are quite limited. On the other hand, armbian supports so many boards! But if you’re a Yocto-man, then you know that this doesn’t help much. Therefore, I thought, why not port the u-boot and kernel patches to the meta-sunxi layer and build images that support the same allwinner boards as armbian?

And the result was this repo that does exactly that. Though it’s still a work in progress.


This repo is actually a mix of meta-sunxi and armbian and only supports H2, H3 and H5 boards from nanopi and orange-pi. The README.md is quite detailed, so you don’t really need to read the rest of the post to bring it up and build your images.

More details please?

Yes, sure. Let’s see some more details. Well, most of the hard work is already done in armbian and meta-sunxi. In the armbian build, they have implemented a script to automatically patch the u-boot and the kernel, and all the patches are compatible with their patch system. Generally, the trick with armbian is that it actually deletes most of the files that it touches and applies new ones, instead of patching each file separately. Therefore, the patches are larger, but on the other hand they’re much easier to maintain. It’s a neat trick.

The script that is used in armbian to apply the patches is in lib/compilation.sh. There you’ll find two functions, advanced_patch() and process_patch_files(), and these are the ones that we would like to port to the meta-sunxi layer. Other than that, armbian uses the files in config/boards/*.conf to apply the proper patches (e.g. default, next, dev). Those refer to patch/kernel. There, for example, you’ll find that sunxi has the sunxi-dev, sunxi-next and sunxi-next-old folders, and inside each folder there are some patches. If you build u-boot and kernel for a sunxi-supported board, then you’ll find in output/debug/output.log and output/debug/patching.log which patches are used for each board.

Therefore, I just took the u-boot and kernel patches from armbian and implemented the patching system in the meta layer. To keep things simple, I added the patch functions to both the u-boot and the kernel recipes, instead of implementing a bbclass that could handle both. Yeah, I know, Yocto has a lot of nice automations, but sometimes it’s not worth the trouble… So in my branch you’ll find the patch script in both recipes-bsp/u-boot/do_patches.sh and recipes-kernel/linux/linux-stable/do_patch.sh. Both scripts share the same code, which is the code that is also used by armbian. The patches for u-boot and the kernel are in a folder called patches, in the same path as the scripts.

Last but not least, I’ve also added the option to create .wic.bz2 and .bmap images. Please use them if you want lightning-fast speed when you flash images to an SD card or eMMC.


If you want to use Yocto to build custom distributions/images for allwinner H2, H3 and H5, then you can use this meta layer. It’s just a mix of the meta-sunxi layer and the patch system from armbian, which offers much wider board support. For now I’ve ported most of the nano-pi boards that use H2, H3 and H5 cpus and soon I’ll do the same for the orange-pi boards (update: done). For the rest of the boards (A10 etc.) you can still use the same layer.

Also, support for wic images and bmap-tools is good to have, so use it wherever you can.

Have fun!

Driving an ILI9341 LCD with an overclocked stm32f103 (updated)


LCDs… I think LEDs and LCDs are probably the most common wet dream of people who like playing with embedded. I mean, who doesn’t like blinking LEDs, and furthermore who doesn’t like to draw graphics with a microcontroller on an LCD? If you don’t, press Alt+F4 now, please.

LEDs are easy. Toggling a pin or flashing a LED is the first breath of every project. There you know that your microcontroller’s heart is beating properly and it doesn’t have arrhythmia. Then, driving a character LCD is a bit harder, but still easy; driving a graphic LCD, though, is definitely harder, especially if you’re starting from scratch. Do you have to start from scratch, though? Nah… There are many projects out there, and this is just another one.

OK, so what’s the motivation behind this, if there are so many projects out there? For me it was the fact that I don’t like to use all this arduino-like stuff with my STMs, I don’t like HAL, and I couldn’t find a proper cmake project that builds out of the box, without special dependencies like specific IDEs, compilers etc. With this project you just download cmake and a gcc compiler, point the cmake toolchain to your gcc compiler and run the build. Then it works… (maybe)

Of course, if you are a regular customer here, there’s no need to say that this is a completely stupid project. It does nothing. No, really. You won’t see any of the fancy graphics that other people post with their STMs on youtube. You’ll see just a yellow screen. Why? Because I just wanted to benchmark and have a template to use for any other project.

Note: I’ve updated the code and the post, because I’ve added support for the xpt2046/ads7843 touch controller. I’m using SPI2 with DMA to read the touch sensor and also the /PENIRQ interrupt pin instead of polling.

Overclocking, SPI, DMA and other fancy buzzwords

If you found this by searching the web, then you’re probably here because you know exactly what you want. SPI & DMA!!1! The reason that I like the bluepill stm32 boards is that they have a lot of DMA channels and they are dirt-cheap. On top of that you can overclock them up to 128MHz!

So, why is DMA important? Well, I won’t bother you with the details here; if you need to know more than “it’s much, much faster”, then you need to do some web-searching for the specifics, as there are people that already explain this stuff better than me. The fact is that by using DMA on the SPI tx/rx, the transfer speed sky-rockets and you can achieve the maximum available bandwidth.

On the other hand, overclocking is more interesting. The stm32f103 can be easily overclocked by just changing the PLL value. Then you can increase the main clock from 72MHz to 128MHz. Not bad, right? Especially if you think that a project which drives a graphic LCD will benefit a lot from this speed increase. I assume that you’re not crazy enough to do that in a commercial project, but if you are then you’re my hero.
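The idea, roughly, looks like the sketch below. It assumes the bluepill’s 8MHz HSE crystal and the CMSIS register names from stm32f10x.h; it’s an illustration of the concept, not the repo’s actual overclock_stm32f103() function, and running a 72MHz-rated part at 128MHz is obviously out of spec.

```c
/* Sketch: clock an stm32f103 at 8MHz (HSE) * 16 (PLL) = 128MHz.
 * Out of spec -- the part is rated for 72MHz max. Use at your own risk. */
void overclock_sketch(void)
{
    /* 2 flash wait states; the datasheet table stops at 72MHz,
       so we just keep the highest documented setting */
    FLASH->ACR |= FLASH_ACR_LATENCY_2;

    RCC->CR |= RCC_CR_HSEON;                 /* enable the external crystal */
    while (!(RCC->CR & RCC_CR_HSERDY));

    RCC->CFGR &= ~RCC_CFGR_PLLMULL;
    RCC->CFGR |= RCC_CFGR_PLLSRC             /* PLL source = HSE */
               | RCC_CFGR_PLLMULL16;         /* 8MHz * 16 = 128MHz */
    RCC->CFGR |= RCC_CFGR_PPRE1_DIV2;        /* APB1 = 64MHz (spec is 36MHz...) */

    RCC->CR |= RCC_CR_PLLON;                 /* start the PLL and wait for lock */
    while (!(RCC->CR & RCC_CR_PLLRDY));

    RCC->CFGR |= RCC_CFGR_SW_PLL;            /* switch SYSCLK to the PLL */
    while ((RCC->CFGR & RCC_CFGR_SWS) != RCC_CFGR_SWS_PLL);
}
```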

In this project I’ve done some benchmarking with two different system clocks, @72MHz and @128MHz, and there’s a significant difference, as you’ll see later in the benchmarks.



I’m using an stm32f103c8t6 board (bluepill). These modules cost less than €2 on ebay and you may have already seen me using them in other stupid projects, too.


There is a very standard LCD module that you can find on ebay and it costs around $7. It’s a 2.8″ TFT, supports 240×320 resolution, and it has a touch interface and an sd card holder. The part name is TJCTM24028-SPI and it’s the following one:

It’s a beauty, right?

USB-uart module

You need this to print the FPS count every second and also if you want to add your own custom commands. You can find these on ebay for less than €1.50 and it looks like this:


Finally, you need an ST-Link programmer to upload the firmware, like this one:

Or whatever programmer you like to use.

Pinout connections

As the LCD is not wireless, you need to connect it somehow to your stm32. We’re lucky in this case, because both the LCD and the stm32 have conductive pins, and if they’re connected with each other in the proper way then it may work. The connections you need to make are:

STM32 ILI9341 (LCD)
3.3 VCC
STM32 ILI9341 (touch controller)
STM32 UART module
PA10 (RX) TX

You can power the stm32 from the USB connector.

Project source code

You can download the project source code from here:


All you need to do is install (or have already installed) a gcc toolchain for ARM. I’m using gcc-arm-none-eabi-7-2017-q4-major, which you can find here. Just scroll down a bit, because there are newer toolchains; but from the tests I’ve done, it seems this one produces the most compact code. Then, depending on your OS, change the path of the toolchain in the TOOLCHAIN_DIR variable in the project’s cmake/TOOLCHAIN_arm_none_eabi_cortex_m3.cmake file. Last, run ./build.sh on Linux or build.cmd on Windows to build, and then flash the bin/hex on the stm32.


Using DMA, the SPI of the stm32f103 can achieve an SPI clock up to 36MHz when the mcu is running at its default highest frequency, which is 72MHz. That’s really fast already, compared to other Cortex-M3 mcus at the same frequency (and even faster mcus). To use the default 72MHz clock, you need to comment out line 47 in main.c here, otherwise the clock will be set to 128MHz.

This is a capture of CLK/MOSI when sending the byte 0xAA.

With the above clock settings, the stm32f103 achieves 29 FPS when drawing all the pixels of the screen.

By overclocking the mcu to 128MHz, the SPI/DMA speed is much higher. To enable the overclocking you need to un-comment line 47 here (SystemCoreClock = overclock_stm32f103();), which by default is already enabled.

This is a capture of CLK/MOSI when sending the byte 0xAA.

Now you can see that the SPI clock frequency is ~63MHz, which is almost double the previous one. That means that updating all the pixels on the screen can be done at a rate of 52 FPS, which is quite amazing for this €2 board.
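The FPS numbers can be sanity-checked with some simple arithmetic, assuming 16-bit RGB565 pixels and that the DMA keeps the SPI bus ~100% busy (SPI1 runs at PCLK2/2, so nominally 36MHz at 72MHz sysclk and 64MHz at 128MHz):

```c
/* Upper bound on the full-screen update rate: one frame is
 * 320 * 240 pixels * 16 bits (RGB565) = 1,228,800 bits over SPI. */
double max_full_frame_fps(double spi_hz)
{
    double bits_per_frame = 320.0 * 240.0 * 16.0;
    return spi_hz / bits_per_frame;
}
```

That gives ~29.3 FPS at 36MHz and ~52.1 FPS at 64MHz, which lines up nicely with the measured 29 and 52 FPS.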

Touch controller (UPDATE)

I’d lost my sleep knowing that I hadn’t implemented the touch control interface; therefore, I’ve updated the code and added support for the touch controller and also a calibration routine.

So, now there are two function modes. The first one (and the default) is the benchmark mode and the other is the calibration mode. During the calibration you can calibrate the sensor, and you need to do that if you want to retrieve pixel x/y values. Without calibration you’ll only get the raw adc sensor values, which may or may not be what you want.

To enter the calibration mode you need to send a command to the uart port. The supported commands are the following:

    MODE:
      BENCH : Benchmark mode
      CALIB : Calibration mode

    FPS:
      0 : Disable FPS display
      1 : Tx FPS on UART
      2 : Tx FPS on UART and on display

    TOUCH:
      0 : Do not Tx X/Y from touch to UART
      1 : Tx X/Y from touch to UART

The default values are MODE=BENCH, FPS=0, TOUCH=0. Therefore to enter to the calibration mode send this command to the UART: MODE=CALIB.

The calibration routine is very simple, so do not expect fancy handling in there. Even the switch statement in `ili9341_touch_calib_start()` is not needed, as it’s a completely serial process. I was about to implement a state machine, but it wasn’t worth it, so I just left the switch in there.

So, when you enable the calibration mode, you’ll get this screen.

Then you need to press the center of the cross. Behind the scenes, this code is in the ili9341_touch_calib.c file, inside the ili9341_touch_calib_start() function, which at the state STATE_DRAW_P1 draws the screen and then waits in STATE_WAIT_FOR_P1 for the user to press the cross. I’ve added de-bouncing in the xpt2046_polling() function, but the xpt2046 library itself doesn’t have any. So the xpt2046_update() function, which updates the static X/Y variables of the lib, doesn’t do de-bouncing. The reason is that this keeps the solution generic, as de-bouncing is not always wanted; and if it is, it can be implemented easily.
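For reference, de-bouncing a touch press can be as simple as a saturating counter: report a press only after N consecutive samples agree. This is just a sketch of the concept with made-up names and threshold, not the actual xpt2046_polling() code.

```c
/* Report a press only after DEBOUNCE_CNTR consecutive "pressed" samples;
 * any released sample resets the counter. */
#define DEBOUNCE_CNTR 5

int debounced_press(int raw_pressed)
{
    static int cntr = 0;

    if (raw_pressed) {
        if (cntr < DEBOUNCE_CNTR)
            cntr++;             /* count consecutive pressed samples */
    } else {
        cntr = 0;               /* any bounce resets the counter */
    }
    return cntr >= DEBOUNCE_CNTR;
}
```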

Anyway, after pressing the point on the screen, 2 more points will appear, and after that the code will calculate the calibration data. The calibration is needed if you want to get the touch presses expressed in pixels that correspond to the screen pixels. Otherwise, the touch sensor only returns 12-bit ADC values, which are not very useful if you need to retrieve the pixel location. Therefore, by calibrating the touch sensor surface to the LCD, you can get the screen pixels that are just under the pressure point.

The algorithm for that is in the touch_calib.c file and it actually derives from a TI application note called “Calibration in touch-screen systems”, which you can probably find at this link. The only thing worth a note is that there are two main calibration methods, the 3-point and the 5-point. Most of the time you’ll find the 5-point calibration method, but the 3-point also gives a good result. There’s a calc file in the sources (source/libs/xpt2046-touch/calculations.ods) that you can use to emulate the algorithm results. In this file the (X’1,Y’1), (X’2,Y’2), (X’3,Y’3) are the points that you read from the sensor (12-bit ADC values) and (X1,Y1), (X2,Y2), (X3,Y3) are the pixels in the center of each cross you draw on the screen. Therefore, in the ods file I’ve just implemented the algorithms of the pdf file. The same algorithms are in the code.
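The 3-point method from the TI appnote boils down to solving a small linear system for an affine transform. A compact host-side version (my own naming, not the touch_calib.c API) could look like this:

```c
/* 3-point touch calibration: fit the affine transform
 *   px = ax*x + bx*y + cx,  py = ay*x + by*y + cy
 * where (x[i], y[i]) are the raw 12-bit ADC readings and (X[i], Y[i])
 * the known screen pixels of the three calibration crosses. */
typedef struct { double ax, bx, cx, ay, by, cy; } calib_t;

int calib_3point(const double x[3], const double y[3],
                 const double X[3], const double Y[3], calib_t *c)
{
    double det = (x[0]-x[2])*(y[1]-y[2]) - (x[1]-x[2])*(y[0]-y[2]);
    if (det == 0.0)
        return -1;   /* the 3 points must not be collinear */

    /* Cramer's rule on the reduced 2x2 system, then back-substitution */
    c->ax = ((X[0]-X[2])*(y[1]-y[2]) - (X[1]-X[2])*(y[0]-y[2])) / det;
    c->bx = ((x[0]-x[2])*(X[1]-X[2]) - (x[1]-x[2])*(X[0]-X[2])) / det;
    c->cx = X[2] - c->ax*x[2] - c->bx*y[2];
    c->ay = ((Y[0]-Y[2])*(y[1]-y[2]) - (Y[1]-Y[2])*(y[0]-y[2])) / det;
    c->by = ((x[0]-x[2])*(Y[1]-Y[2]) - (x[1]-x[2])*(Y[0]-Y[2])) / det;
    c->cy = Y[2] - c->ay*x[2] - c->by*y[2];
    return 0;
}

/* Map a raw ADC sample to screen pixel coordinates */
void calib_apply(const calib_t *c, double x, double y,
                 double *px, double *py)
{
    *px = c->ax*x + c->bx*y + c->cx;
    *py = c->ay*x + c->by*y + c->cy;
}
```

The affine form also absorbs any rotation or skew between the touch panel and the LCD, which is why it works better than just scaling the min/max ADC values.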

After you finish the calibration, the calibration data are printed out on the serial port and also stored in RAM, and they are used for the touch presses. After a reset those settings are gone. If you want to print the ADC and pixel values (after the calibration) on the serial port, then you can send the TOUCH=1 command on the UART. After that, every touch will display both values.

Finally, if you enable showing the FPS on the LCD, you will see something weird.

This displays the FPS rate of drawing all the pixels on the screen, and you see a tearing in the number, which is 51. This happens because the number print overwrites that part of the screen when it’s displayed, and it also gets overwritten by the full display draw. The reason for that is to simplify the process and also not to insert more delay while doing the maths.

Generally, I think the code is straightforward and it serves just as a template for someone to start writing their own code.


The stm32f103 rocks. That’s the conclusion. Of course, getting there is not as easy as using an arduino or a teensy board, but the result pays off the difficulty of getting there. And once you get there, it’s not difficult anymore.

Again, this is a really stupid and completely useless project. You won’t see any fancy graphics with this code, but nevertheless it’s a very nice template to create your own graphics, using optimized SPI/DMA code for the stm32f103 and with the overclocking included. That’s a good deal.

Of course, I didn’t write the whole code from scratch; I’ve ported some code from here, which is itself a port of the Adafruit arduino library. So it’s a port of a port. FOSS is a beautiful thing, right?

Have fun!

Added macros to CuteCom


For the last few years I’ve been using only Linux at my workplace, and since I’ve also started using only Linux at home too, I’ve found myself missing some tools that I was using on Windows. That’s pretty much the case for everyone that at some point tries or tried the same thing. Gladly, because more and more people have been doing this over the last years, there are many alternatives for most of the tools. Alternatives can be either better or worse than the tool you were using, of course, but the best thing with FOSS is that you can download the code and implement any functionality you’re missing yourself. And that’s great.

Anyway, one of the tools I’ve really missed is br@y’s terminal. I assume that every bare metal embedded firmware developer is aware of this amazing tool. It’s everything you need when you develop firmware for a micro-controller. For an embedded Linux serial console I prefer putty, though. Anyway, this great tool is Windows-only, and although you can use Wine to run it on Linux, soon you’ll find out that when you develop USB CDC devices the whole wine/terminal thing doesn’t work well.


There are many alternative (console and gui) terminal apps for Linux and I’ve used most of those you can find in the first 7 pages of google results. But after using Bray’s terminal for so many years, only one of them seemed close enough to it; and that’s CuteCom. The nice thing with CuteCom is that it’s written in Qt, so it’s a cross-platform app, and Qt is also easy and nice to write code in.

Tbh, I’ve been familiar with Qt since the Trolltech era and then the Nokia era. I’ve written a lot of code in Qt, especially for the Nokia n900 phone and the Maemo OS. But since Nokia abandoned Maemo and also MeeGo (which later became Tizen), I’ve started doing other stuff. I was really disappointed back then, because I believed the n900 and Maemo could be the future, until everything went wrong and Nokia abandoned everything and adopted Windows for their mobiles. I’ll moan another time about how much Microsoft loves Linux.

Anyway, Qt may have also affected my decision to go with CuteCom, but the problem was that the functionality I was using most in Bray’s terminal wasn’t there. And I mean the macros. Let me explain what macros are. Macros are just predefined data that you can send over the serial port by pressing the corresponding macro button. You can also attach a timer to every macro and send it automatically at a programmable interval in milliseconds. That’s pretty much all you need when you’re developing a firmware. But this functionality was not implemented in CuteCom yet.

Therefore, I had to implement it myself and also find an excuse to write some Qt again.


I’ve forked CuteCom on github and added the macro functionality here:


I’ve done a pull request, but I can’t tell if it will get merged or not. But anyway, if you are a macro lover like myself, then you can download it from the above branch.

Edit: Macros are now merged to the master git branch, thanks to Meinhard Ritscher.


I’ll add here a couple of notes on how to build it, because it’s not very clear from the README file. You can either clone the repo and use Qt Creator to load the project and build it, or you can use cmake. In case you use cmake, you need the Qt libs and headers (version >= 5) in your system.

If you don’t have Qt installed then you need to do the following (tested on Ubuntu 18.04):

git clone https://github.com/neundorf/CuteCom
cd CuteCom
sudo apt install cmake qtbase5-dev libqt5serialport5-dev
cmake .
make
sudo make install

This builds and installs cutecom in /usr/local/bin/cutecom. Then you can create a desktop launcher:

gedit ~/.local/share/applications/CuteCom.desktop

And add the  following:

#!/usr/bin/env xdg-open
[Desktop Entry]
Name=CuteCom
Comment=Terminal
Exec=/usr/local/bin/cutecom

If you have installed another Qt SDK then you can just point cmake there and build like this:

cmake . -DCMAKE_PREFIX_PATH=/opt/Qt/5.x/gcc_64
sudo make install

This will be installed in `/usr/local/bin/cutecom` (try also `which cutecom`, just to be sure…)

Finally, you’ll need a desktop icon or a launcher. For Ubuntu you can create a `CuteCom.desktop` file in your `~/.local/share/applications` path and paste the following:

#!/usr/bin/env xdg-open
[Desktop Entry]
Name=CuteCom
Exec=env LD_LIBRARY_PATH=/opt/Qt/5.11.1/gcc_64/lib /usr/local/bin/cutecom

The result should look like this:

Have fun!

Joystick gestures w/ STM32


Time for another stupid project which adds no value to humanity! This time I got my hands on one of those dirt-cheap analog joysticks from ebay that cost less than €1.5. I guess that you can make a lot of projects by using them as normal joysticks, but for some reason I wanted to do something more pointless than that. And that was to make a joystick that outputs gestures via USB.

By gestures I mean that, instead of sending the ADC values in realtime, it only supports the basic directions like up, down, left, right and button press, and then sends the gesture combinations through USB. If you use mouse gestures in your web browser, then you know what I mean.

OK, let’s see the components and result.



I’m using an stm32f103c8t6 board. These modules cost less than €2 on ebay and you may have already seen me using them in other stupid projects, too.


You can find those joysticks on ebay if you search for a joystick breakout for arduino. Although they’re cheap, the quality is really nice and the stick feeling is nice, too. This is how it looks:

As you can see from the image, there is a +5V pin, which of course you need to connect to your micro-controller’s Vcc; for the stm32 that’s 3V3, not +5V. The VRx pin is the x-axis variable resistor, the VRy is the y-axis variable resistor and the SW is the button. The switch output is activated if you press the joystick down. The orientation of the x,y axes is valid when you hold the joystick in your palm so that you can read the pin descriptions.


Finally, you need an ST-Link programmer to upload the firmware, like this one:

Or whatever programmer you like to use.

USB-uart module

You don’t really need this for the project, but if you like to debug or add some debugging messages of your own, then you’ll need it. You can find these on ebay for less than €1.50 and it looks like this:

Making the stupid project

I’ve built the project on a breadboard. You can use a prototype board if you want to make this permanent. I’ve added support for both USB and UART in the project. The USB, of course, is the easiest and preferred way to connect the device to your computer, but the UART port can be used for debugging. So, let’s start with a simple schematic of how everything is connected. This is a screenshot from KiCad.

Therefore, the PA0 and PA1 are connected to VRx and VRy and they are set as ADC inputs. In the source code I’m using both the ADC1 and ADC2 channels at the same time. The ADC1 channel is also using DMA, which is not really necessary, as the conversion rate doesn’t need to be that fast, but I’m re-using code that I’ve already written for other projects. The setup of the ADCs is in the hw_config.c file in the source code. The ADCs continuously convert the VRx and VRy inputs in the background, as they are interrupt-driven, but only every JOYS_UPDATE_TMR_MS milliseconds does the joys_update() function update the algorithm with the last valid values. The default update rate is 10ms, but you can trim it down to 1ms if you like. You can also have a look at the joys_update() function in joystick.c and trim JOYS_DEBOUNCE_CNTR and JOYS_RECOGNITION_TIME_MS to your needs. The first one controls the debounce sensitivity and the second one the timeout of the gesture; that is, the time in ms after the joystick is released to the center position before the recognition timer expires and the recorded gesture is sent.
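The core of turning raw ADC values into up/down/left/right events can be sketched like this. The 12-bit center value, dead-zone threshold, names and axis polarity are all illustrative picks of mine, not the actual values in joystick.c.

```c
/* Map a raw joystick ADC sample pair to one of 4 directions (or none).
 * 12-bit ADC, so the mechanical center sits around 2048; a dead zone
 * around the center filters out the resting jitter. */
enum dir { DIR_NONE, DIR_UP, DIR_DOWN, DIR_LEFT, DIR_RIGHT };

#define ADC_CENTER 2048
#define DEADZONE    600   /* ignore small movements around the center */

enum dir joys_direction(int vrx, int vry)
{
    int dx = vrx - ADC_CENTER;
    int dy = vry - ADC_CENTER;

    if (abs(dx) < DEADZONE && abs(dy) < DEADZONE)
        return DIR_NONE;                    /* stick is centered */

    /* the dominant axis wins, so diagonals resolve to one direction */
    if (abs(dx) >= abs(dy))
        return dx > 0 ? DIR_RIGHT : DIR_LEFT;
    return dy > 0 ? DIR_UP : DIR_DOWN;
}
```

A gesture is then just the de-bounced sequence of these directions, flushed when the recognition timer expires after the stick returns to the center.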

The source code can be found here:

To build the code you need cmake, and to flash it you need an ST-Link. Have a look at the README.md file in the repo for details. You also need to point to your arm toolchain.

Because I’m using the -flto -O3 flags, you need to make sure that you use a GCC version newer than 4.9.

I’ve tested the code with this version:

arm-none-eabi-gcc (GNU Tools for Arm Embedded Processors 7-2017-q4-major) 7.2.1 20170904 (release) [ARM/embedded-7-branch revision 255204]

The code size is ~14.6KB.

Finally, this is an example video of how the joystick gesture performs.

Have fun!

GCC compiler size benchmarks


Compilers, compilers, compilers…

The black magic behind producing executable binaries for different kinds of processors. All programmers use them, but most don’t care about their internals and differences. Anyway, this post is not about compiler internals; it’s about how different versions perform regarding the binary size they produce.

I made another benchmark a few months ago here, but that was using different compilers (GCC and clang) and different C libraries. Now I’m using only GCC, but different versions.

Size doesn’t matter!

Well, don’t get me wrong here, but sometimes it does. A typical scenario is when you have a small microcontroller with a small flash size and your firmware is getting bigger and bigger. Another scenario is that you need to sacrifice some flash space for a DFU bootloader and then you realize that 4-12K are gone without writing a single line of code for your actual app.

Therefore, size does matter.

Compiler Flags

Compilers come with different optimisation flags, and the -Os flag commands the compiler to optimize specifically for size.

OK, so the binary size matters only when you use -Os!

No, no, no. The binary size matters whatever optimisation flag you use. For example, your main need may be to optimise for performance. An example is if you’re using fast gpio toggling, e.g. implementing a custom bit-banging bus to program and interface an FPGA (like Xilinx’s selectmap). In this case you may need the -O1/2/3 optimisation more than -Os, but the size still matters, because you’re limited in flash space. So, two different compiler versions may show even a 1KB difference for the same optimization level, and that 1KB may be critical someday to one of your projects!

And don’t forget about -flto! This is an important flag if you need size optimisation; therefore, all the benchmarks are done both with and without this flag.


I’ve benchmarked the following 9 different GCC compiler versions:

  • gcc-arm-none-eabi-4_8-2013q4
  • gcc-arm-none-eabi-4_9-2014q4
  • gcc-arm-none-eabi-5_3-2016q1
  • gcc-arm-none-eabi-5_4-2016q2
  • gcc-arm-none-eabi-5_4-2016q3
  • gcc-arm-none-eabi-6_2-2016q4
  • gcc-arm-none-eabi-6-2017-q1-update
  • gcc-arm-none-eabi-6-2017-q2-update
  • gcc-arm-none-eabi-7-2017-q4-major

It turned out that all the GCC6 compilers performed exactly the same; therefore, without reading the release notes, I assume that the changes have to do with fixes rather than optimisations.

The code I’ve used for the benchmarks is here:

This is my next stupid project and it’s not completed yet, but it still compiles, and without optimisations it creates a ~50KB binary. To use your toolchain, just change the toolchain path in the `TOOLCHAIN_DIR` variable in the `cmake/TOOLCHAIN_arm_none_eabi_cortex_m3.cmake` file and run ./build.bash on Linux or build.cmd on Windows.


These are the results from compiling the code with different compilers and optimisation flags.


gcc-arm-none-eabi-4_8-2013q4
flag size in bytes
-O0 51908
-O1 32656
-O2 31612
-O3 39360
-Os 27704


gcc-arm-none-eabi-4_9-2014q4
flag size in bytes size in bytes (-flto)
-O0 52216 56940
-O1 32692 23984
-O2 31496 22988
-O3 39672 31268
-Os 27563 19748


gcc-arm-none-eabi-5_3-2016q1
flag size in bytes size in bytes (-flto)
-O0 51696 55684
-O1 32656 24032
-O2 31124 23272
-O3 39732 30956
-Os 27260 19684


gcc-arm-none-eabi-5_4-2016q2
flag size in bytes size in bytes (-flto)
-O0 51736 55724
-O1 32672 24060
-O2 31144 23292
-O3 39744 30932
-Os 27292 19692


gcc-arm-none-eabi-5_4-2016q3
flag size in bytes size in bytes (-flto)
-O0 51920 55888
-O1 32684 24060
-O2 31144 23300
-O3 39740 30948
-Os 27292 19692

gcc-arm-none-eabi-6_2-2016q4, gcc-arm-none-eabi-6-2017-q1-update, gcc-arm-none-eabi-6-2017-q2-update

flag   size in bytes   size in bytes (-flto)
-O0    51632           55596
-O1    32712           24284
-O2    31056           22868
-O3    40140           30488
-Os    27128           19468


gcc-arm-none-eabi-7-2017-q4-major

flag   size in bytes   size in bytes (-flto)
-O0    51500           55420
-O1    32488           24016
-O2    30672           22080
-O3    40648           29544
-Os    26744           18920


From the results it’s pretty obvious that the -flto flag makes a huge difference in all versions, except GCC4.8, where the code failed to compile at all with this flag enabled.

Also, it seems that when no optimisations are applied (-O0), -flto actually creates a larger binary instead of a smaller one. I have no explanation for that, but anyway it doesn’t really matter, because there’s no point in using -flto at all in such cases.

OK, so now let’s get to the point. Is there any difference between GCC versions? Yes, there is, but you need to look at it from different angles. For the -Os flag, GCC7-2017-q4-major produces a binary which is ~380 bytes smaller without -flto and ~550 bytes smaller with -flto, compared to the second-best GCC version (GCC6). That means GCC7 will save you from switching to a part with bigger flash only if your GCC6-built firmware exceeds the flash size by just those amounts. But what are the chances, right? We’re not talking about an 8051 here…

But wait… let’s see what happens with -O3 though. In this case, with the -flto flag, GCC7 creates a binary which is almost 1KB smaller compared to the GCC6 version. That’s big enough, and it may save you from switching to a larger part! Therefore, the size also matters for other optimisation levels like -O3. This also means that if your code size is getting larger and you need the maximum performance optimisation, then the compiler version may be significant.
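Those deltas can be checked directly against the tables above. A quick Python sketch doing the arithmetic with the GCC6 and GCC7-2017-q4-major numbers:

```python
# Size results (in bytes) copied from the benchmark tables above.
# Keys: (compiler, flag, lto-enabled).
sizes = {
    ("gcc6", "-Os", False): 27128,
    ("gcc6", "-Os", True):  19468,
    ("gcc7", "-Os", False): 26744,
    ("gcc7", "-Os", True):  18920,
    ("gcc6", "-O3", True):  30488,
    ("gcc7", "-O3", True):  29544,
}

def delta(flag, lto):
    """Bytes saved by GCC7 over GCC6 for a given flag/LTO combination."""
    return sizes[("gcc6", flag, lto)] - sizes[("gcc7", flag, lto)]

print(delta("-Os", False))  # -> 384, the ~380 bytes mentioned above
print(delta("-Os", True))   # -> 548, the ~550 bytes mentioned above
print(delta("-O3", True))   # -> 944, almost 1KB
```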

So, why not use always the latest GCC version?

That’s a good question. Well, if you’re writing your software from scratch now, then you probably should. But if you have an old project which compiles with an old GCC version, that doesn’t mean it will also compile with -Wall on the newer version. That’s because between those two versions there might be new warnings and errors that break the build. Hence, you need to edit your code and fix all the warnings and errors. If the codebase is not that big, the effort may not be much; but if it’s large, you may need to spend a lot of time on it. It’s even worse if you’re porting code that is not yours.

Therefore, the compiler version does matter for the binary size at all the available optimisation levels, and depending on your code size and processor you may need to choose between those versions according to your needs.

Have fun!