Libre Computer Cottonwood Alta TensorFlow Lite MobileNetV1 Guide

Hi!

Sorry for the incomplete previous message.

I performed the actions below on a fresh headless Debian 12 install, and the board just locked up after the attempt to enable the NPU. Is this because the firmware was not updated correctly? When I updated the firmware, it seemed that my board is not a true Alta but an Alta pre-production board?

git clone https://github.com/libre-computer-project/libretech-wiring-tool
cd libretech-wiring-tool
make
sudo ./ldto enable npu
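
In case it matters, the same tool can report what is currently applied; as I understand it, enable only lasts until reboot, while merge makes the overlay persistent:

./ldto active
sudo ./ldto merge npu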

Hi there. I have confirmed that my Alta is not a pre-production version but a normal 1.0, I believe.
So the freeze after enabling the NPU is not caused by that. Perhaps only the HDMI output is blocked, which for now is fine for me, as long as I can connect something to the serial port, which should keep communicating even if HDMI is frozen, right?

Please let me know whether I am right, since I am determined to become an active contributor to NPU work.
Regards,
Nik.

Hi.

I didn't have that problem while enabling the NPU, so I might not be much help to you, but if you can connect to the Alta somehow, have you disabled the NPU and checked that it is disabled with the following commands?

sudo ldto disable npu
ldto active

Hi! Thanks for your response! I do not know how to connect to the serial port, nor whether Debian 12 uses it for a console.

Yes, maybe it would be a good idea to try typing the commands you suggested "blind" while the display is not working. Thanks. Also, could you please tell me which OS you use, since you do not have these issues? I have tried many times, even with different images, but always with the same result.
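
For the record, the recipe I have seen for a serial console from another Linux machine, which I have not tried yet, assumes a 3.3 V USB-UART adapter on the UART header showing up as /dev/ttyUSB0 and the usual 115200 baud console:

sudo apt install screen
sudo screen /dev/ttyUSB0 115200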

I’m using Debian 12 like you.

Did you flash your OS with a bit-accurate flashing tool? At first I had a login problem with BalenaEtcher, so I switched to Win32DiskImager.
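
For what it's worth, one way to check that a flash came out bit-accurate is to hash the image and compare it against the same number of bytes read back from the card. A sketch, assuming the image is named debian.img and the card shows up as /dev/sdX:

sha256sum debian.img
# read back exactly as many bytes as the image is long and hash them
sudo head -c "$(stat -c %s debian.img)" /dev/sdX | sha256sum
# the two digests should match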

I thought you meant an Ethernet connection when you said serial connection. Had you not enabled sshd before enabling the NPU?

Hi Charlie, I did not have a login problem, but I should try that bit-accurate approach, I guess. I tried to configure Wi-Fi as described on this website but had no success, so I just used Ethernet for internet access, since the board works out of the box without any Ethernet configuration. What is sshd? I did not do any of that.

I connect to the Alta with ssh, so I needed to enable sshd, the SSH server. If you can connect some other way, you don't need sshd. Hope you can use the NPU as you like.
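
On Debian 12 that amounts to installing openssh-server and enabling the ssh service, so something like this before enabling the NPU overlay:

sudo apt install openssh-server
sudo systemctl enable --now ssh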

Howdy! Just received my Alta, and the board is a beautiful thing, one of the best-looking boards I've ever seen.
I'm trying to follow this guide, but it seems the mesa teflon branch has changed: there are no tests under src/gallium/frontends/teflon/tests, and certainly no test_conv2d.py or classification.py. Is there perhaps another branch this guide was meant for?

Anton, I had success using the teflon-tp branch, as it includes all of the test files. I don't believe this branch is being actively maintained at this point, but it's a good starting point to verify operation.
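
For reference, I pulled it from Tomeu's mesa fork; from memory the checkout was along these lines (verify the URL and branch name before relying on them):

git clone https://gitlab.freedesktop.org/tomeu/mesa.git -b teflon-tp --depth=1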

The MobileNetV1 model this guide is written for is a classification model. Will the Teflon delegate also work with an object detection model like MobileNetV2? Does the model need to be compiled specifically for running with the delegate?

Yes, MobileNetV2 is supported along with SSDLite.

https://docs.mesa3d.org/teflon.html
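
The examples on that page run the TFLite test scripts with the delegate passed via -e; the invocation is roughly the following (flags from memory, so check the page for the exact arguments):

python3 classification.py -i grace_hopper.bmp -m mobilenet_v1_1.0_224_quant.tflite -l labels_mobilenet_quant_v1_224.txt -e libteflon.so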

@librecomputer
I am having some issues very similar to what @charlie was having above:

MESA: error: get_param:40: get-param (1f) failed! -22 (Invalid argument)
MESA: error: get_param:40: get-param (20) failed! -22 (Invalid argument)
MESA: error: get_param:40: get-param (21) failed! -22 (Invalid argument)
MESA: error: get_param:40: get-param (22) failed! -22 (Invalid argument)
MESA: error: get_param:40: get-param (23) failed! -22 (Invalid argument)
We need at least 1 NN core to do anything useful.
Aborted

I started getting these errors after deleting and rebuilding the main mesa branch here: https://gitlab.freedesktop.org/mesa/mesa.git as directed by the link you provided. I get these errors even with the MobileNetV1 model.

If I go back to the teflon-tp branch from Tomeu, I have no problems with MobileNetV1, but it doesn't work with V2 or SSDLite; I get a segmentation fault. I have tried a few other branches like teflon and teflon-ci, and those yield the same mesa errors as above. What am I missing?
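
For context, this is roughly how I configured the mesa build; teflon is a real meson option in current mesa, but the exact option set here is my assumption:

meson setup build -Dgallium-drivers=etnaviv -Dteflon=true
ninja -C build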

Hi, @charlie.
I get the same error when I load the delegate. Did you solve this problem?

We are preparing a demo at Embedded World and will update the guide next week.

Updated on 2024-04-13

Now that librecomputer has updated their official kernel, the patch below is no longer necessary.

===

Hi, I overcame the error by patching the etnaviv kernel module like this:

git clone https://github.com/libre-computer-project/libretech-linux.git -b linux-6.1.y-lc --single-branch --depth=1
cd libretech-linux
patch -p1 < etnaviv_kernel_module_6.1.83.patch
sudo cp include/uapi/drm/etnaviv_drm.h /lib/modules/6.1.83-14793-g05e363bdd9a7/build/include/uapi/
cd drivers/gpu/drm/etnaviv
make
sudo rmmod etnaviv
sudo insmod etnaviv.ko

The etnaviv_kernel_module_6.1.83.patch file was made by applying Tomeu's patch plus some additional modifications of my own. Please check that the patch introduces no vulnerabilities before you apply it and rebuild the module.
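
To confirm the rebuilt module is the one actually loaded and that the NPU was picked up, I checked with:

lsmod | grep etnaviv
sudo dmesg | grep -i etnaviv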


We have updated the instructions with the necessary changes. Please update to the latest kernel. We will update the test examples later.
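
On the stock Debian 12 image the kernel update should come through apt; after upgrading, confirm the running version before retrying the delegate:

sudo apt update && sudo apt full-upgrade
uname -r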

Hi, @charlie

Thanks for your patch info; I can now run inference on the model without errors.

But how long is your inference time in ms? I get 195.975 ms for mobilenet_v1_1.0_224_quant.tflite, which seems abnormally slow compared with Tomeu's data.

best,

This is my result after upgrading to librecomputer's official kernel 6.1.85-15205; it is similar to what I got with my patch applied:

Loading external delegate from ../mesa/build/src/gallium/targets/teflon/libteflon.so with args: {}
0.866667: military uniform
0.031373: Windsor tie
0.015686: mortarboard
0.007843: bow tie
0.007843: academic gown
time: 10.566ms

Loading the external delegate took about 25 seconds. I've been testing on Debian 12.


@librecomputer @charlie Have you tried running object detection with the SSDLite MobileDet model? It runs but is very slow, around 500 ms. Once it loads the teflon delegate, I get this message: "INFO: Created TensorFlow Lite XNNPACK delegate for CPU". I don't believe it should be doing this, but I can't figure out how to disable it. I get great results when running MobileNetV1 classification, around 6 ms.

If I run the SSDLite model without the teflon delegate, the inference time is only 200 ms, and I still get the same info message about the XNNPACK delegate.