
Python • Re: Python Threads Introduces 10msec Delay in SPI

For anyone else, here is what CoPilot gives.

What do you think?


Why your threaded version suddenly takes ~600 ms
✔ 1. pigpio’s SPI/I2C calls are remote procedure calls
When you use pigpio from Python, you’re not talking to hardware directly.
You’re sending commands over a UNIX socket to the pigpiod daemon.

Each SPI byte transfer =

Python → socket write

kernel → pigpiod

pigpiod → hardware

pigpiod → kernel

kernel → Python

In a tight loop, this is fine.
In a threaded loop, the timing becomes unpredictable.
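
A quick way to see the cost of those round‑trips is to time 48 single‑byte transfers against one 48‑byte transfer (a standalone sketch, not the original code; the channel, baud and flags are placeholders for your setup):

python
import time
import pigpio

pi = pigpio.pi()                       # talk to the local pigpiod daemon
h = pi.spi_open(0, 1_000_000, 0)       # SPI channel 0, 1 MHz, default flags
t0 = time.monotonic()
for _ in range(48):                    # 48 single-byte transfers = 48 socket round-trips
    pi.spi_xfer(h, [0])
t1 = time.monotonic()
pi.spi_xfer(h, [0] * 48)               # one 48-byte transfer = a single round-trip
t2 = time.monotonic()
print(f"48 x 1 byte: {(t1 - t0) * 1e3:.2f} ms, 1 x 48 bytes: {(t2 - t1) * 1e3:.2f} ms")
pi.spi_close(h)
pi.stop()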

✔ 2. Python threads do not run in parallel
Python’s GIL means only one thread executes Python bytecode at a time.

Your three threads (SPI, I2C0, I2C1) are now competing for the GIL.

This introduces delays of several milliseconds when threads block/unblock.
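
You can measure this without any SPI hardware: time a short sleep while a second CPU‑bound thread is holding the GIL (a standalone sketch):

python
import threading
import time

stop = threading.Event()

def busy():
    x = 0
    while not stop.is_set():           # CPU-bound Python loop: holds the GIL between switch intervals
        x += 1

threading.Thread(target=busy, daemon=True).start()
worst = 0.0
for _ in range(200):
    t0 = time.monotonic()
    time.sleep(0.001)                  # ask for 1 ms; must re-acquire the GIL to resume
    worst = max(worst, time.monotonic() - t0)
stop.set()
print(f"worst wake-up after a 1 ms sleep: {worst * 1e3:.1f} ms")
With CPython's default 5 ms switch interval (sys.getswitchinterval()), the worst case is usually well above the requested 1 ms.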

✔ 3. Linux scheduler adds ~10 ms jitter
On Raspberry Pi OS (non‑RT kernel), the scheduler tick is typically 10 ms.

If a thread blocks (e.g., waiting on a condition variable), it may not be rescheduled until the next tick → 10 ms delay.

This exactly matches your logic analyzer:
10 ms gap between SPI byte transfers.
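
The same effect shows up with condition variables: a minimal sketch that measures the delay between notify() and the waiting thread actually resuming (run it alongside the busy thread from the previous sketch to watch the latency grow):

python
import threading
import time

cond = threading.Condition()
ready = False
notified_at = 0.0
wake_latency = 0.0

def waiter():
    global wake_latency
    with cond:
        while not ready:
            cond.wait()                # blocks until notified, then must be rescheduled
        wake_latency = time.monotonic() - notified_at

t = threading.Thread(target=waiter)
t.start()
time.sleep(0.1)                        # let the waiter block on the condition
with cond:
    ready = True
    notified_at = time.monotonic()
    cond.notify()
t.join()
print(f"notify -> wake: {wake_latency * 1e3:.2f} ms")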

✔ 4. pigpiod adds its own latency under load
pigpiod samples the GPIO at a fixed interval (5 µs by default, not 5 ms) and services socket commands alongside that work.
Under load, the time to service each command can stretch into the millisecond range.

Your threaded design increases load → pigpiod delays increase.

🧪 Why the non‑threaded version is fast (~500 µs)
Because:

One thread

No GIL contention

pigpiod receives a tight sequence of SPI commands

Linux scheduler never preempts the thread mid‑transfer

pigpiod stays in its fast path

🛠 How to fix it (your options)
✅ Option 1: DO NOT use threads for SPI/I2C
This is the simplest and most reliable.

Your original single‑threaded version is already fast.
Just interleave SPI and I2C calls in one loop.
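
For reference, a minimal sketch of that single loop with pigpio (the SPI channel, I2C bus numbers, addresses and byte counts below are placeholders for your devices):

python
import pigpio

pi = pigpio.pi()
spi = pi.spi_open(0, 1_000_000, 0)     # placeholder channel / baud / flags
i2c0 = pi.i2c_open(0, 0x48)            # placeholder bus / address
i2c1 = pi.i2c_open(1, 0x49)            # placeholder bus / address
try:
    while True:
        _, spi_data = pi.spi_xfer(spi, [0] * 48)   # whole SPI block in one call
        _, a = pi.i2c_read_device(i2c0, 2)         # then each I2C device in turn
        _, b = pi.i2c_read_device(i2c1, 2)
        # ... process spi_data, a and b here ...
finally:
    pi.spi_close(spi)
    pi.i2c_close(i2c0)
    pi.i2c_close(i2c1)
    pi.stop()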

✅ Option 2: Use pigpio’s bulk transfer instead of byte‑by‑byte
pigpio’s Python API can transfer multiple bytes in one call (note the call is pi.spi_xfer(handle, data), not spidev’s spi.xfer):

python
count, rx = pi.spi_xfer(handle, [0] * 48)
This eliminates 48 round‑trips over the socket.

If your device really requires 48 separate single‑byte transfers, you can still do:

python
for _ in range(48):
    pi.spi_xfer(handle, [0])
but that still incurs 48 RPCs.
Better: use pigpio waves or bb_spi to batch them.
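
If you try the bb_spi route, the shape is roughly this (a sketch only: the GPIO numbers are the standard SPI0 pins used as placeholders, bit‑banged SPI tops out around 250 kbps, and the argument order should be checked against the pigpio docs):

python
import pigpio

pi = pigpio.pi()
CS, MISO, MOSI, SCLK = 8, 9, 10, 11        # placeholder GPIOs (the SPI0 pins); match your wiring
pi.bb_spi_open(CS, MISO, MOSI, SCLK, 100_000, 0)
count, rx = pi.bb_spi_xfer(CS, [0] * 48)   # all 48 bytes handled inside pigpiod in one request
pi.bb_spi_close(CS)
pi.stop()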

✅ Option 3: Use the pigpio C API (no pigpiod)
If you link against libpigpio (not pigpiod), you avoid the socket overhead entirely.

This gives you deterministic timing and microsecond‑level latency.

✅ Option 4: Use spidev instead of pigpio for SPI
spidev talks directly to the kernel SPI driver.
It is dramatically faster and avoids pigpiod’s batching.

Example:

python
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                    # /dev/spidev0.0: bus 0, chip select 0
spi.max_speed_hz = 10_000_000     # 10 MHz SPI clock
data = spi.readbytes(48)          # 48 bytes in one kernel call
This will be orders of magnitude faster than per‑byte calls through pigpiod.

✅ Option 5: Use multiprocessing instead of threading
If you must parallelize:

Use multiprocessing.Process, not threads

Each process talks to pigpiod independently

No GIL contention

Linux scheduler treats processes more fairly

This often removes the 10 ms gaps.
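
A minimal multiprocessing sketch along those lines (bus numbers, addresses and iteration counts are placeholders):

python
import multiprocessing
import pigpio

def spi_worker():
    pi = pigpio.pi()                       # each process gets its own pigpiod connection
    h = pi.spi_open(0, 1_000_000, 0)
    for _ in range(1000):
        pi.spi_xfer(h, [0] * 48)
    pi.spi_close(h)
    pi.stop()

def i2c_worker(bus, addr):
    pi = pigpio.pi()
    h = pi.i2c_open(bus, addr)
    for _ in range(1000):
        pi.i2c_read_device(h, 2)
    pi.i2c_close(h)
    pi.stop()

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=spi_worker),
             multiprocessing.Process(target=i2c_worker, args=(0, 0x48)),   # placeholder bus / address
             multiprocessing.Process(target=i2c_worker, args=(1, 0x49))]   # placeholder bus / address
    for p in procs:
        p.start()
    for p in procs:
        p.join()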

❌ What will NOT work
Trying to “tune” Python threads

Increasing thread priority

Using asyncio

Using condition variables differently

Hoping pigpiod will behave like a real‑time driver

None of these solve the underlying scheduling + GIL + RPC latency.

Statistics: Posted by bensimmo — Thu Dec 25, 2025 1:21 pm


