py-mpi4py
This package provides Python bindings for the Message Passing Interface (MPI) standard. It is implemented on top of the MPI-1/MPI-2 specifications and exposes an API closely modeled on the standard MPI-2 C++ bindings.
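To illustrate the API style, here is a minimal point-to-point sketch (the file name ptp_demo.py is illustrative; run it with mpirun and at least two ranks, as in the job script below):

# ptp_demo.py: minimal point-to-point sketch (illustrative)
from mpi4py import MPI

comm = MPI.COMM_WORLD          # default communicator spanning all ranks
rank = comm.Get_rank()

if rank == 0:
    # send a picklable Python object to rank 1
    comm.send({"greeting": "hello"}, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    print("rank 1 received:", data)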
Slurm Script
#!/bin/bash
#SBATCH --job-name=mpi4py-test   # create a name for your job
#SBATCH -o mpi4py_job.o%j        # name of stdout output file (%j expands to jobId)
#SBATCH -p CUIQue                # queue name
#SBATCH --nodes=4                # node count (max 8 as of now)
#SBATCH --ntasks=16              # total number of tasks
#SBATCH --cpus-per-task=1        # cpu-cores per task
#SBATCH --time=00:10:00          # total run time limit (HH:MM:SS)

module load py-setuptools/58.2.0
module load py-mpi4py/3.1.2
module load python/3.8

mpirun python3.8 hello_mpi.py
hello_mpi.py Code
# hello_mpi.py:
# usage: python hello_mpi.py

from mpi4py import MPI
import sys

def print_hello(rank, size, name):
    msg = "Hello World! I am process {0} of {1} on {2}.\n"
    sys.stdout.write(msg.format(rank, size, name))

if __name__ == "__main__":
    size = MPI.COMM_WORLD.Get_size()
    rank = MPI.COMM_WORLD.Get_rank()
    name = MPI.Get_processor_name()
    print_hello(rank, size, name)
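Beyond printing per-rank messages, the same communicator object supports collective operations. A small sketch (illustrative, not part of the example above) that sums the rank numbers across all processes:

# reduce_demo.py: collective-operation sketch (illustrative)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# every rank contributes its rank number; rank 0 receives the sum
total = comm.reduce(rank, op=MPI.SUM, root=0)

if rank == 0:
    print("sum of ranks across", comm.Get_size(), "processes:", total)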
Submit the job to the scheduler as follows:
sbatch pympiHelloScript.slurm
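If the job runs successfully, stdout is written to mpi4py_job.o<jobid> (per the -o directive in the script above). With --ntasks=16, it should contain sixteen lines of the form below; the rank order and node names will vary (node01 here is a hypothetical hostname):

Hello World! I am process 0 of 16 on node01.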
For managing your job, refer to this guide.
For more on parallel Python, refer to the official mpi4py documentation:
https://mpi4py.readthedocs.io/en/stable/