MultiThreading Architecture: Brainstorming #41
Replies: 3 comments 9 replies
-
I only moved the main app code into a worker because I was going to call Atomics.wait on SharedArrayBuffer values, which requires execution on a worker thread, but technically the Job Runner Zero could be made into a more JS-compatible mode.
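For reference, a minimal sketch of why that move was needed: in browsers, Atomics.wait throws a TypeError on the main thread, so blocking waits have to happen inside a worker. Node.js allows it anywhere, which makes the wait semantics easy to demo as a plain script (this is an illustrative snippet, not code from the branch):

```javascript
// Atomics.wait semantics on a SharedArrayBuffer-backed Int32Array.
// In a browser this call throws on the main thread; in Node it runs anywhere.
const sab = new SharedArrayBuffer(4);
const flag = new Int32Array(sab);

// The value at index 0 is 0, but we expect 1 → returns "not-equal" immediately.
console.log(Atomics.wait(flag, 0, 1));      // "not-equal"

// The value matches the expected 0, so we block until the 10 ms timeout.
console.log(Atomics.wait(flag, 0, 0, 10));  // "timed-out"
```

The third outcome, "ok", is returned when another thread calls Atomics.notify on the same index while a thread is waiting.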
-
Hello, thanks for looking into this! I have to say that I'm pretty noobish when it comes to JavaScript and have never worked with web workers, so excuse me if I make stupid suggestions.
Would it be possible to copy paste the … (and a bit of code while destroying threads) and then call …?
This would severely limit the things you can do with the library, so would not have my preference.
This would also limit options a lot (and I wouldn't exactly know what 'common collision behavior' would be).
I think this would have my preference. Most of the time you can get all the information you need from the body/bodies that you're colliding with, and the data from these bodies comes from the WASM layer. If I quickly scan the JS worker documentation, it sounds like you can pass parameters to each worker, so if you need some global context (like which body ID is a conveyor belt), could you pass that in as a param?
Since you don't know ahead of time if a job is going to do a collision callback, this would mean you need to run all collision jobs on the main thread (including the broadphase job, as that can also trigger collision callbacks), so essentially you would lose all parallelism for about 50% of the work in …
-
Tried to do the trivial implementation on https://github.com/PhoenixIllusion/JoltPhysics.js/blob/feature/web-worker-threadpool/ Currently I've run into some issues where thread_local logic in the system is messing things up. I think I'd need to get more familiar with the inner workings to get through debugging all of the multi-threaded access. Kind of frustrating, and I am not used to C++ in WASM for multi-threading. I learned that WASM multi-threading requires some stack setup, since each thread will have its own stack, and that it is not easy to grab the workers after they spawn from inside WASM. I've gotten as far as having a bunch of thread-access asserts trigger, but during debugging the Profiler was also throwing issues from thread_local references. Mainly lots of out-of-bounds exceptions for now.

-
I looked into the existing discussion about multithreading and the limitations that were run into (and reproduced) regarding Safari.
Approaching this as more of a custom JobPool solution with the worker API (not pthreads), I was able to confirm that, at a raw SharedArrayBuffer + Atomics level, Safari has no issues in general with Atomics store/load/add/sub/wait/notify. I wrote code to test on Safari a trivial (non-JoltJS) case of launching a scheduler thread that then shared-mutex-signaled workers with shared-memory payloads and waited until its atomic-controlled active-job-counter either changed or hit zero.
The current theoretical approach would be a JS-Worker JobPool combined with a collection of JobRunner classes.
The main Jolt.JS entry point I focused on was the Step() command and Job->Execute(). The physics Step logic appears to build up jobs and pass them onto the JobPool scheduler, which is then responsible for scheduling them on the threads and waiting until they all finish at the end.
The initial trivial logic was that a custom native C++ JobPool would be created and would instantiate a collection of Job Runner classes (say, numbered 0 to 4). It would do light checks on dependencies but otherwise just stuff the Job Runners full as long as they had unused job slots.
Each Job Runner pointer is passed to the other worker threads on creation. The worker thread's job is just: execute the Runner on launch and wait. Sit on an internal pointer job slot; when the internal job slot becomes non-null, execute that job, set the slot back to null, atomically subtract 1 from the ThreadPool's running-job count, and go back to sitting on the job slot until it is non-null again. Each worker thread doesn't even really need any JS scaffolding besides the one Emscripten command of "Job Runner class :: run( $ptrThis )".
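That per-worker job-slot loop could be sketched like this (hypothetical names; slot 0 holds the "job pointer", slot 1 the pool's running-job count), driven single-threaded here so it terminates:

```javascript
// Worker loop over a shared Int32Array: sit on the job slot, run the job
// when it becomes non-null (non-zero), clear the slot, decrement the pool's
// running count, and go back to waiting.
const SLOT = 0;     // "job pointer" (0 = empty)
const RUNNING = 1;  // pool-wide running-job counter

function workerLoop(mem, executeJob, shouldExit) {
  for (;;) {
    const jobPtr = Atomics.load(mem, SLOT);
    if (jobPtr !== 0) {
      executeJob(jobPtr);             // stand-in for Job Runner :: run(ptrThis)
      Atomics.store(mem, SLOT, 0);    // mark the slot free
      if (Atomics.sub(mem, RUNNING, 1) === 1) {
        Atomics.notify(mem, RUNNING); // last job finished → wake the scheduler
      }
    } else if (shouldExit()) {
      return;
    } else {
      Atomics.wait(mem, SLOT, 0, 5);  // sleep until a job is assigned
    }
  }
}

// Single-threaded drive-through: assign one fake job, run the loop until idle.
const mem = new Int32Array(new SharedArrayBuffer(8));
Atomics.store(mem, SLOT, 1234);       // fake job "pointer"
Atomics.store(mem, RUNNING, 1);
const executed = [];
workerLoop(mem, (p) => executed.push(p), () => Atomics.load(mem, RUNNING) === 0);
console.log(executed); // [ 1234 ]
```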
The complicated part in my planning logic was any Job Runner with an assigned Job that has callbacks. The callbacks will work for any functions that exist in C++, since those technically exist in the shared-memory buffer. The problem is JS-interface callbacks: those only exist on the original JS layer, in the JSContext they were created in.
I see 4 possible options for this:
My diagram has the Step() command run outside of the AppContext, because the execution flow would run:
a. On JobPool Thread - Step() starts
b. Thread Pool Runs
c. Schedule and block/wait on threads until all Jobs are done
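The a/b/c flow above could be sketched as follows (assumed names, with a single-threaded stand-in where "assigning" a job runs it immediately):

```javascript
// Step() from the JobPool thread's point of view: build the job list, seed
// the running counter, hand jobs to runners, then block until the counter
// drains to zero.
const RUNNING = 0;
const mem = new Int32Array(new SharedArrayBuffer(4));

function step(buildJobs, assignToRunner) {
  const jobs = buildJobs();                  // a. Step() builds the jobs
  Atomics.store(mem, RUNNING, jobs.length);  // b. hand them to the pool
  jobs.forEach(assignToRunner);
  let v;                                     // c. block until all jobs done
  while ((v = Atomics.load(mem, RUNNING)) !== 0) {
    Atomics.wait(mem, RUNNING, v, 50);
  }
}

// Single-threaded stand-in: assigning a job executes it right away and
// decrements the counter, as a real worker thread would.
const done = [];
step(() => ["broadphase", "narrowphase", "solve"],
     (job) => { done.push(job); Atomics.sub(mem, RUNNING, 1); });
console.log(done.length); // 3
```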