Conversation
@greptileai, comprehensively update your review based on the changes above. Do not overwrite the schema/flowchart; update your review; update the out-of-diff review.
radioshaq/radioshaq/radio/sdr_tx.py
```python
s = np.asarray(samples_iq)
if np.iscomplexobj(s):
    i = (np.clip(np.real(s) * 127, -128, 127)).astype(np.int8)
    q = (np.clip(np.imag(s) * 127, -128, 127)).astype(np.int8)
    iq = np.empty(2 * len(s), dtype=np.int8)
    iq[0::2] = i
    iq[1::2] = q
else:
    iq = np.asarray(s, dtype=np.int8)
duration_sec = len(iq) / (2.0 * sample_rate)
iq_bytes = iq.tobytes()
iq_b64 = base64.b64encode(iq_bytes).decode("ascii")
```
HackRFServiceClient.transmit_iq normalizes IQ synchronously on the event loop
radio_tx.py correctly offloads modulation (e.g. nfm_modulate) to a thread executor and returns a complex64 numpy array. That array is then passed directly to this method, which performs the int8 normalization, iq.tobytes(), and base64.b64encode() all synchronously on the event loop.
For a 10-second NFM clip at 2 MHz, samples_iq contains ~20 million complex64 elements (≈160 MB). The operations on lines 392–401 include:
- `np.real(s)` — 80 MB temporary
- `np.clip(…) * 127` — 80 MB operation
- `.astype(np.int8)` × 2 — 20 MB each
- `iq.tobytes()` — 40 MB copy
- `base64.b64encode(iq_bytes)` — ~53 MB encode
The total wall-clock time is potentially several seconds, blocking every other coroutine in the server.
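The size figures above can be sanity-checked directly. This is a sketch using the review's own example parameters (2 MHz sample rate, 10-second clip); only the arithmetic is verified here, not the actual wall-clock cost:

```python
import base64

sample_rate = 2_000_000          # 2 MHz, per the review's example
duration_s = 10
n = sample_rate * duration_s     # 20 million complex samples

complex64_mb = n * 8 / 1e6       # 8 bytes per complex64 sample
int8_iq_mb = n * 2 / 1e6         # interleaved int8 I/Q: 2 bytes per sample
b64_mb = len(base64.b64encode(b"\x00" * (n * 2))) / 1e6  # base64 expands ~4/3

print(complex64_mb, int8_iq_mb, round(b64_mb, 1))  # 160.0 40.0 53.3
```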
Consider wrapping lines 390–401 in a thread executor, consistent with how radio_tx.py handles modulation:
```python
loop = asyncio.get_running_loop()

def _normalize():
    if isinstance(samples_iq, (bytes, bytearray, memoryview)):
        iq = np.frombuffer(samples_iq, dtype=np.int8)
    else:
        s = np.asarray(samples_iq)
        if np.iscomplexobj(s):
            i = (np.clip(np.real(s) * 127, -128, 127)).astype(np.int8)
            q = (np.clip(np.imag(s) * 127, -128, 127)).astype(np.int8)
            iq = np.empty(2 * len(s), dtype=np.int8)
            iq[0::2] = i
            iq[1::2] = q
        else:
            iq = np.asarray(s, dtype=np.int8)
    iq_bytes = iq.tobytes()  # serialize once; reuse for the base64 encode
    return iq_bytes, base64.b64encode(iq_bytes).decode("ascii"), len(iq) / (2.0 * sample_rate)

iq_bytes, iq_b64, duration_sec = await loop.run_in_executor(None, _normalize)
```

```python
# Generate a short buffer of silence for playback test.
silence = np.zeros((frames, 1), dtype="float32")
# Open a stream for playback and (if supported) capture; immediately close.
sd.play(silence, samplerate=samplerate, device=device_id, blocking=True)
```
Blocking audio playback on the async event loop
sd.play(..., blocking=True) is a synchronous audio I/O call. Even at only 0.1 seconds of audio it will block the event loop for approximately 100 ms, preventing all other coroutines (incoming requests, health checks, WebSocket streams) from running during that window.
Wrap this in run_in_executor to keep the event loop unblocked:
```python
loop = asyncio.get_running_loop()
await loop.run_in_executor(
    None, lambda: sd.play(silence, samplerate=samplerate, device=device_id, blocking=True)
)
```

You'll also need `import asyncio` at the top of this module if not already present.
```python
tmp = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
temp_path = tmp.name
try:
    tmp.write(content)
    tmp.close()
```
Synchronous file write of up to 100 MB on the async event loop
tmp.write(content) is synchronous disk I/O performed directly on the async request handler. For a large audio file near the 100 MB limit, this write could take a noticeable amount of time (tens to hundreds of milliseconds on a slow disk), stalling all other requests on the event loop.
Consider wrapping the file write in run_in_executor:
```python
import asyncio

loop = asyncio.get_running_loop()
try:
    await loop.run_in_executor(None, lambda: (tmp.write(content), tmp.close()))
except Exception:
    tmp.close()
    Path(temp_path).unlink(missing_ok=True)
    raise
```

Alternatively, use `aiofiles` for async file writes if it is already a dependency.
Greptile Summary
This PR integrates HackRF SDR hardware support end-to-end: a new `HackRFDeviceManager`/`HackRFBroker` pair in the remote receiver service, `/tx/tone` and `/tx/iq` HTTP broker endpoints, a `HackRFServiceClient` for remote TX delegation, full NFM/AM/SSB/CW modulation and demodulation pipelines (`fm.py`, `analog_mod.py`, `dsp/analog.py`, `dsp/nfm.py`), and a new normalized mode model (`modes.py`). A Twilio SMS/WhatsApp webhook handler and several config schema improvements are included.

Key changes:
- `HackRFDeviceManager` owns the USB device handle with an `asyncio.Lock`; `HackRFBroker` coordinates RX/TX handoff via a `should_stop_rx` event so active streams yield to TX requests
- `ReceiverService.stream_frequency` has been restored with audio demod support (`mode`, `audio_rate_hz`, `bfo_hz`), closing the broken-WebSocket regression from a previous review iteration
- Band-plan checks (`is_restricted`, `is_tx_spectrum_allowed`) are applied server-side before every `/tx/tone` and `/tx/iq` call — the broker can no longer be used to bypass band-plan enforcement
- `ensure_test_state` is now opt-in (called from test fixtures) instead of running unconditionally at module import time, eliminating the silent auth bypass in tests
- The `dotenv_settings` drop in `settings_customise_sources` has been fixed; `.env` file loading is restored
- `_to_mono_float` in both `fm.py` and `analog_mod.py` captures `is_integer` before the stereo `mean()` call, fixing the dead int16 normalization bug
- `SsbDemodulator.demod` now correctly conjugates IQ for LSB mode

New issues found in this iteration:
- `HackRFServiceClient.transmit_iq` normalizes large complex64 IQ arrays (`np.clip`, `tobytes`, `base64.b64encode`) synchronously on the event loop before POSTing — for a 10-second NFM clip this is ~500 MB of numpy/encode work that can block for several seconds
- `sd.play(blocking=True)` in the new `test_audio_device` endpoint stalls the event loop for 100 ms per call
- `tmp.write(content)` in the new `/send-audio` endpoint performs synchronous disk I/O of up to 100 MB on the event loop
- The TX audit logger reads `TX_AUDIT_LOG_PATH` from the environment, which does not match the pydantic config convention of `RADIOSHAQ_RADIO__TX_AUDIT_LOG_PATH`, leaving the file audit trail silently unconfigured for operators who set the path via YAML/config

Confidence Score: 3/5
Important Files Changed
Sequence Diagram
```mermaid
sequenceDiagram
    participant Client
    participant HQServer as HQ Server (main API)
    participant RadioTxAgent as RadioTransmissionAgent
    participant ServiceClient as HackRFServiceClient
    participant ReceiverSvc as Remote Receiver (server.py)
    participant Broker as HackRFBroker
    participant DevMgr as HackRFDeviceManager
    participant HW as HackRF USB
    Client->>HQServer: POST /radio/send-audio (WAV file)
    HQServer->>RadioTxAgent: execute(task)
    RadioTxAgent->>RadioTxAgent: sf.read() [executor]
    RadioTxAgent->>RadioTxAgent: nfm_modulate() [executor]
    RadioTxAgent->>ServiceClient: transmit_iq(freq, iq_complex64, sample_rate)
    Note over ServiceClient: np.clip/tobytes/base64 [event loop ⚠️]
    ServiceClient->>ReceiverSvc: POST /tx/iq (JWT + base64 IQ)
    ReceiverSvc->>ReceiverSvc: _require_broker_auth()
    ReceiverSvc->>ReceiverSvc: is_restricted() + is_tx_spectrum_allowed()
    ReceiverSvc->>Broker: request_tx() → stop_rx flag set
    Broker->>DevMgr: with_device(_blocking_tx) [executor]
    DevMgr->>HW: stream_hackrf_iq_bytes()
    HW-->>DevMgr: done
    DevMgr-->>Broker: complete
    Broker->>Broker: clear_tx() → stop_rx cleared
    ReceiverSvc->>ReceiverSvc: log_tx(audit)
    ReceiverSvc-->>ServiceClient: {"success": true}
    ServiceClient-->>RadioTxAgent: ok
    RadioTxAgent-->>HQServer: {"success": true}
    HQServer-->>Client: 200 OK
```

Comments Outside Diff (2)
radioshaq/radioshaq/radio/hackrf_tx_compat.py, lines 135-157 (link)

`_stream_via_callback` calls `stop_tx()` immediately after `start_tx()`, aborting transmission

On HackRF drivers that implement `start_tx(callback)` asynchronously (the standard libhackrf pattern, where the callback is fired from a USB transfer thread), calling `dev.stop_tx()` right after `dev.start_tx(_tx_cb)` terminates the USB streaming before the callback has a chance to fill any buffers. The result is that effectively zero data is transmitted.

The test fake makes this look correct because its `start_tx` runs the callback synchronously to completion before returning. A real or semi-realistic async driver would not.

This path is reached when `libhackrf` is `None` and `start_tx` takes a parameter, i.e. it is the fallback for environments without the ctypes bindings. It should wait for transmission to complete, similar to `_stream_via_direct_libhackrf`:
_stream_via_callback(dev, payload)is missing theduration_secparameter that the fix above requires;stream_hackrf_iq_byteswould need to pass it through.radioshaq/radioshaq/remote_receiver/backends/hackrf_backend.py, line 222-270 (link)Lambda closures capture mutable
selfattributes — stale demodulator after retuneEach
run_in_executorlambda capturesself._nfm,self._am,self._ssb, orself._cwat call time by closing overself. Ifset_frequencyorconfigureis called concurrently (resetting the demodulators toNone) while the executor is still running the lambda, the lambda holds a reference to the old demodulator object (that's fine), but the next iteration may construct a new demodulator before the old executor task finishes, causing two demodulators to operate on interleaved sample windows.The same late-binding issue:
lambda: float_to_pcm16(self._nfm.demod(s))capturesself._nfmlazily — ifself._nfmis reset toNonebetween when the lambda is created and when the executor thread runs it, the lambda will raiseAttributeError: 'NoneType' object has no attribute 'demod'.Capture the demodulator instance in a local variable before submitting to the executor:
The same pattern applies to all four demod branches.
Last reviewed commit: 724df38