guard spsc queue push with mutex since it's not thread safe #30
The SPSC (single-producer, single-consumer) queue documentation for the push function states:

Requires: only one thread is allowed to push data to the spsc_queue

We had multiple threads calling HostPipeline::onNewData once a USB packet arrived. The queue push wasn't protected, so data got released while multiple threads were pushing into the queue and the main Python thread (the consumer) was consuming it.

The issue can be reproduced easily on commit e754c2b with the options:

./depthai.py -s previewout left right metaout -v /dev/null

where each thread artificially spams the queue every 1 ms to trigger the clash. When the issue happens, the terminal will (most likely) print:

Invalid packet stream name! deallocated

but technically the behavior is undefined due to the corruption.

Protecting the queue on both the consumer and producer sides would defeat the purpose of a lock-free queue. The purpose of this PR is not to replace the current queue implementation with a locking one, but to fix the current issue in the current context.