Distributed Conduit
This distributed conduit uses MPI to distribute sample evaluation among n workers. Each worker consists of k MPI ranks, where k is a configurable parameter. Communication among workers is realized via MPI messages.
This model is ideal when your computational model can be directly linked with Korali and/or expects an MPI communicator itself.
For an example of how to create an MPI/Python Korali application, see: MPI/Python Example. For an example of how to create an MPI/C++ Korali application, see: MPI/C++ Example. For more information, see Parallel Execution.
Usage
k["Conduit"]["Type"] = "Distributed"
Configuration
These are settings required by this module.
- Ranks Per Worker
Usage: e["Conduit"]["Ranks Per Worker"] = integer
Description: Specifies the number of MPI ranks per Korali worker.
- Engine Ranks
Usage: e["Conduit"]["Engine Ranks"] = integer
Description: Specifies the number of MPI ranks for the Korali engine.
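These two settings together determine how many MPI ranks the job must be launched with. The sketch below illustrates the rank budget, assuming the engine ranks are counted in addition to the worker ranks; the variable names and the number of workers are illustrative, not part of the Korali API:

```python
# Sketch of the MPI rank budget implied by the two settings above.
# Variable names are illustrative; they are not Korali API calls.

ranks_per_worker = 4   # e["Conduit"]["Ranks Per Worker"]
engine_ranks = 1       # e["Conduit"]["Engine Ranks"]
num_workers = 8        # desired number of concurrent workers (hypothetical)

# Total ranks the MPI job would need, e.g. mpirun -n <total_ranks>,
# assuming engine ranks are allocated in addition to worker ranks.
total_ranks = engine_ranks + num_workers * ranks_per_worker
print(total_ranks)  # 33
```

Under this assumption, launching with fewer ranks would leave some workers without a full set of k ranks.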
Default Configuration
The following configuration will be assigned by default. Any settings defined by the user will override these defaults.
{ "Engine Ranks": 1, "Ranks Per Worker": 1 }
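The override behavior described above can be illustrated with a plain dictionary merge; this is a sketch of the merge semantics only, not of Korali's internals, and the user settings shown are hypothetical:

```python
# Defaults for this module, as listed above.
defaults = {"Engine Ranks": 1, "Ranks Per Worker": 1}

# Hypothetical user configuration: only one key is overridden.
user_settings = {"Ranks Per Worker": 4}

# User-defined keys take precedence; unspecified keys keep their defaults.
effective = {**defaults, **user_settings}
print(effective)  # {'Engine Ranks': 1, 'Ranks Per Worker': 4}
```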