I have a project where I'm trying to project a live video feed to a large RGB LED matrix (currently 256x256).
There are several PicoW involved that receive the image data over WiFi. That part's been solved.
Now I need to implement the rest on a Pi 5 using the directly connected camera: extract frames at 20 fps, reduce each to 256x256, then convert to RGB565 format (i.e. a raw pixel buffer with 16 bits per pixel) so that I can send it to the Picos.
How do I do this in Python?
I've skimmed over the Picamera2 docs, which seem to offer only direct output to various streams and files. I'm unsure how I'd get at the frames so that I can resize them and then extract the raw pixels.
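For context, the RGB565 packing itself I think I can do with numpy; here's a sketch of what I have in mind (the byte order is my assumption, I'd adjust it to whatever the Picos expect). It's really the capture side, getting each frame as an array out of Picamera2, that I'm unsure about:

```python
import numpy as np

def rgb888_to_rgb565(frame: np.ndarray) -> bytes:
    """Pack an (H, W, 3) uint8 RGB frame into a raw RGB565 buffer,
    16 bits per pixel, big-endian (byte order is an assumption)."""
    r = (frame[:, :, 0].astype(np.uint16) >> 3) << 11  # top 5 bits of red
    g = (frame[:, :, 1].astype(np.uint16) >> 2) << 5   # top 6 bits of green
    b = frame[:, :, 2].astype(np.uint16) >> 3          # top 5 bits of blue
    return (r | g | b).astype(">u2").tobytes()

# The capture side is the part I'm guessing at -- something like this,
# if Picamera2 can hand frames back as numpy arrays (untested):
#   from picamera2 import Picamera2
#   picam2 = Picamera2()
#   picam2.configure(picam2.create_video_configuration(
#       main={"size": (256, 256), "format": "RGB888"}))
#   picam2.start()
#   frame = picam2.capture_array("main")   # (256, 256, 3) uint8?
#   payload = rgb888_to_rgb565(frame)      # 256*256*2 = 131072 bytes
```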
Statistics: Posted by tem_pi — Thu Aug 01, 2024 3:17 pm