Hi all,
I am developing a simulation tool for a physics problem that I am working on (using C++). Essentially, the computation consists of manipulating pointer structures that correspond to different physical processes. The output is “events” in time and space, which need to be stored in a tensor (array) with four dimensions.
Normally I would parallelise this using MPI and have one tensor for every process, thus storing N tensors. However, I now need to be able to store much larger tensors (to obtain adequate accuracy), so ideally, I would like to use a shared memory approach and only store a single tensor.
But the only technique I am familiar with is OpenMP, which works fine for manipulating arrays. The question is: what is a good approach to shared-memory parallelisation when one has individual processes that produce data, yet a common data structure in which to store the output?
I am not very well versed in parallel programming techniques so I would really appreciate your input.
Thanks
//
Johan