Use a bash script
from pathlib import Path
import time
import subprocess

from remote_run.run import (
    ExecutionContext,
    SlurmSchedulingEngine,
    GuixRunner,
    remote,
    is_job_finished,
)
Define an execution context with a scheduling engine. For Slurm, you can pass Slurm parameters directly here.
execution_context = ExecutionContext(
    machine="shpc0003.ost.ch",
    working_directory=Path("/cluster/raid/home/reza.housseini"),
    runner=GuixRunner(channels=Path("channels.scm").read_text()),
    scheduling_engine=SlurmSchedulingEngine(
        job_name="test_remote_run",
        mail_type="ALL",
        mail_user="reza.housseini@ost.ch",
    ),
)
Decorate the functions you want to run in the specified execution context.
@remote(execution_context)
def execute_bash_commands(message):
    # Run the commands through bash so that ";" acts as a command
    # separator; as a plain argv element it would be passed to echo
    # as a literal argument and hostname would never run.
    return subprocess.run(
        ["bash", "-c", f"echo {message}; hostname"], capture_output=True
    ).stdout.decode("utf-8").strip()
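As an aside, the `bash -c` indirection matters because `subprocess.run` with an argument list does not involve a shell, so `;` is passed as a literal argument rather than acting as a command separator. A minimal local sketch of the difference (assuming a POSIX `bash` is available):

```python
import subprocess

# With an argument list there is no shell: ";" is just another argument,
# so echo prints it and the second command never runs.
literal = subprocess.run(
    ["echo", "hello", ";", "hostname"], capture_output=True
).stdout.decode("utf-8")
print(literal)  # hello ; hostname

# Running through bash makes ";" separate two commands.
chained = subprocess.run(
    ["bash", "-c", "echo hello; echo world"], capture_output=True
).stdout.decode("utf-8")
print(chained)  # hello
                # world
```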
This call will run on the remote machine specified in execution_context, but due to the asynchronous nature of scheduling engines it does not return the result directly; instead you get a job ID and a function to retrieve the result later.
job_id, result_func = execute_bash_commands("hello")
Now we wait for the remote execution to finish before retrieving the result.
time.sleep(20)
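A fixed sleep is fragile: the job may take longer than 20 seconds, or finish much sooner. In practice one might poll until the job is done. A small self-contained sketch of such a helper (the `wait_for` name and its generic predicate argument are our own, not part of remote_run):

```python
import time

def wait_for(predicate, timeout=300.0, interval=5.0):
    """Poll predicate() until it returns True or timeout seconds elapse.

    Returns True if the predicate succeeded, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# With remote_run this could be used as (hypothetical):
#   finished = wait_for(lambda: is_job_finished(execution_context, job_id))
```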
We should check whether the job has finished before retrieving the result.
if is_job_finished(execution_context, job_id):
    assert result_func() == "hello\nshpc0003"