Add example of Multi-GPU simulation #355

@rkierulf

Description

Feature Request

In some cases it may be beneficial to run MRI simulations on multiple GPUs, for example when the problem is too large to fit in a single GPU's memory. KomaMRI does not have built-in support for this, but because each spin is simulated independently, it would not be too hard for a programmer to manually split the simulation into parts, run each part on a different GPU, and sum the results at the end (see the sketch below).
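
To illustrate the idea, here is what the index split itself could look like in plain Julia (split_spins is a hypothetical helper shown only for illustration; KomaMRI's kfoldperm utility plays this role in the example further down):

# Hypothetical helper: partition spin indices 1:N into k contiguous chunks
split_spins(N, k) = [(i*N÷k + 1):((i+1)*N÷k) for i in 0:k-1]

split_spins(10, 3)  # returns [1:3, 4:6, 7:10]; each range goes to one GPU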

Based on this section of the CUDA.jl documentation, here is one example I think would work without needing to update code within the package (NOTE: I have not tested this!):

using Distributed, CUDA

addprocs(length(devices()))  # one worker process per visible GPU
@everywhere using CUDA, KomaMRI

sys = Scanner()
obj = Phantom()  # placeholder: load or construct a real phantom here
seq = read_seq("SequenceFileName.seq")

# Return the raw signal matrix ("mat") so the per-GPU results can be summed
sim_params = Dict{String,Any}("return_type" => "mat")

# Partition the spin indices into one chunk per GPU
parts = kfoldperm(length(obj), length(devices()))

# Simulate each chunk on its own worker/GPU and fetch the partial signals
signal_arr = asyncmap(zip(parts, workers(), devices())) do (part, p, d)
    remotecall_fetch(p) do
        device!(d)
        simulate(@view(obj[part]), seq, sys; sim_params)
    end
end

# Spins are independent, so the total signal is the sum over chunks
signal = reduce(+, signal_arr)
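
If enough memory is available for a single-device run, one way to sanity-check the result would be to compare against it (again untested; this assumes the same sim_params as above, with a loose tolerance since summation order and GPU arithmetic differ between the two runs):

signal_single = simulate(obj, seq, sys; sim_params)
isapprox(signal, signal_single; rtol=1e-4)  # expected to hold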

If this does work, it would be helpful to add to the examples folder of the repository and package documentation.
