\chapter{Execution of an {\it escript} Script}
\label{EXECUTION}

\section{Overview}
A typical way of starting your {\it escript} script \file{myscript.py} is with the \program{escript} command\footnote{The \program{escript} launcher is not supported under \WINDOWS yet.}:
\begin{verbatim}
escript myscript.py
\end{verbatim}
as already shown in section~\ref{FirstSteps}\footnote{For this discussion, it is assumed that \program{escript} is included in your \env{PATH} environment variable. See the installation guide for details.}.
In some cases it can be useful to work interactively, e.g. when debugging a script, with the command
\begin{verbatim}
escript -i myscript.py
\end{verbatim}
This will execute \file{myscript.py} and, when it completes (or an error occurs), provide a \PYTHON prompt.
To leave the prompt press \kbd{Control-d}.

To start \program{escript} using four threads (e.g. if you use a multi-core processor) you can use
\begin{verbatim}
escript -t 4 myscript.py
\end{verbatim}
This requires {\it escript} to be compiled with \OPENMP\cite{OPENMP} support.

To start \program{escript} using \MPI\cite{MPI} with $8$ processes you use
\begin{verbatim}
escript -p 8 myscript.py
\end{verbatim}
If the processors used are multi-core processors or multi-processor shared memory architectures you can use threading in addition to \MPI. For instance, to run $8$ \MPI processes with $4$ threads each, you use the command
\begin{verbatim}
escript -p 8 -t 4 myscript.py
\end{verbatim}
In the case of a super computer or a cluster, you may wish to distribute the workload over a number of nodes\footnote{For simplicity, we use the term node to refer to either a node in a super computer or an individual machine in a cluster.}.
For example, to use $8$ nodes, with $4$ \MPI processes per node, write
\begin{verbatim}
escript -n 8 -p 4 myscript.py
\end{verbatim}
Since threading has some performance advantages over processes, you may specify a number of threads as well:
\begin{verbatim}
escript -n 8 -p 4 -t 2 myscript.py
\end{verbatim}
This runs the script on $8$ nodes, with $4$ processes per node and $2$ threads per process, that is, $8 \cdot 4 = 32$ \MPI processes and $8 \cdot 4 \cdot 2 = 64$ threads in total.

\section{Options}
The general form of the \program{escript} launcher is as follows:

%%%%
% If you are thinking about changing this please remember to update the man page as well
%%%%

\program{escript}
\optional{\programopt{-n \var{nn}}}
\optional{\programopt{-p \var{np}}}
\optional{\programopt{-t \var{nt}}}
\optional{\programopt{-f \var{hostfile}}}
\optional{\programopt{-x}}
\optional{\programopt{-V}}
\optional{\programopt{-e}}
\optional{\programopt{-h}}
\optional{\programopt{-v}}
\optional{\programopt{-o}}
\optional{\programopt{-c}}
\optional{\programopt{-i}}
\optional{\programopt{-b}}
\optional{\var{file}}
\optional{\var{ARGS}}

where \var{file} is the name of a script and \var{ARGS} are the arguments for the script.
The \program{escript} program will import your current environment variables.
If no \var{file} is given, you will be given a \PYTHON prompt (see \programopt{-i} for restrictions).

The options are used as follows:
\begin{itemize}

\item[\programopt{-n} \var{nn}] the number of compute nodes \var{nn} to be used. The total number of processes used is
$\var{nn} \cdot \var{np}$. This option overrides the value of the \env{ESCRIPT_NUM_NODES} environment variable.
If a hostfile is given, the number of nodes needs to match the number of hosts given in the host file.
If $\var{nn}>1$ but {\it escript} is not compiled for \MPI a warning is printed but execution continues with $\var{nn}=1$. If \programopt{-n} is not set the
number of hosts in the host file is used. The default value is 1.

\item[\programopt{-p} \var{np}] the number of \MPI processes per node. The total number of processes to be used is
$\var{nn} \cdot \var{np}$. This option overrides the value of the \env{ESCRIPT_NUM_PROCS} environment variable. If $\var{np}>1$ but {\it escript} is not compiled for \MPI a warning is printed but execution continues with $\var{np}=1$. The default value is 1.

\item[\programopt{-t} \var{nt}] the number of threads used per process.
This option overrides the value of the \env{ESCRIPT_NUM_THREADS} environment variable.
If $\var{nt}>1$ but {\it escript} is not compiled for \OPENMP a warning is printed but execution continues with $\var{nt}=1$. The default value is 1.
An example of setting these environment variables instead of using the command line options is given after this list.

\item[\programopt{-f} \var{hostfile}] the name of a file with a list of host names. Some systems require the addresses or names of the compute nodes on which the \MPI processes should be spawned to be specified. These addresses or names are listed in the file \var{hostfile}. If \programopt{-n} is set, the number of different
hosts defined in \var{hostfile} must be equal to the number of requested compute nodes \var{nn}. This option overrides the value of the \env{ESCRIPT_HOSTFILE} environment variable. By default no host file is used.
\item[\programopt{-c}] prints information about the settings used to compile {\it escript} and stops execution.
\item[\programopt{-V}] prints the version of {\it escript} and stops execution.
\item[\programopt{-h}] prints a help message and stops execution.
\item[\programopt{-i}] executes the script \var{file} and switches to interactive mode after the execution has finished or an exception has occurred. This option is useful for debugging a script. The option cannot be used if more than one process ($\var{nn} \cdot \var{np}>1$) is used.
\item[\programopt{-b}] does not invoke \PYTHON. This is used to run non-\PYTHON programs.

\item[\programopt{-e}] shows additional environment variables and commands used during \program{escript} execution. This option is useful if users wish to execute scripts without using the \program{escript} command.

\item[\programopt{-o}] switches on the redirection of output of processes with \MPI rank greater than zero to the files \file{stdout_\var{r}.out} and \file{stderr_\var{r}.out}, where \var{r} is the rank of the process. This option overrides the value of the \env{ESCRIPT_STDFILES} environment variable.

% \item[\programopt{-x}] interpret \var{file} as an \esysxml\footnote{{\it esysxml} has not been released yet.} task.
% This option is still experimental.

\item[\programopt{-v}] prints some diagnostic information.
\end{itemize}
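As noted in the option descriptions above, the launcher settings may also be supplied through environment variables, with the command line options taking precedence. For example, assuming a \program{bash}-like shell, the following should be equivalent to running \program{escript} with the options \programopt{-n 8 -p 4 -t 2}:
\begin{verbatim}
export ESCRIPT_NUM_NODES=8
export ESCRIPT_NUM_PROCS=4
export ESCRIPT_NUM_THREADS=2
escript myscript.py
\end{verbatim}
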
\subsection{Notes}
\begin{itemize}
\item Make sure that \program{mpiexec} is in your \env{PATH}.
\item For MPICH and INTELMPI, if a hostfile is present,
\program{escript} will start the \program{mpd} daemon before execution.
\end{itemize}

\section{Input and Output}
When \MPI is used on more than one process ($\var{nn} \cdot \var{np} >1$) no input from standard input is accepted. Standard output on any process other than the master process (\var{rank}=0) will not be available.
Error output from any process is redirected to the node where \program{escript} has been invoked.
If the \programopt{-o} option is used or \env{ESCRIPT_STDFILES} is set\footnote{That is, it has a non-empty value.}, then the standard and error output from any process other than the master process will be written to the files \file{stdout_\var{r}.out} and \file{stderr_\var{r}.out} (where
\var{r} is the rank of the process).

If files are created or read by individual \MPI processes with information local to the process (e.g. in the \function{dump} function) and more than one process is used ($\var{nn} \cdot \var{np} >1$), the \MPI process rank is appended to the file names.
This avoids problems if the processes use a shared file system.
Files which collect data that is global to all \MPI processes are created by the process with \MPI rank 0 only.
Users should keep in mind that if the file system is not shared, then a file containing global information
which is read by all processes needs to be copied to the local file system before \program{escript} is invoked.
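As an illustration of process-local output, the following sketch (assuming the {\it finley} domain module is installed and {\it escript} was built with netCDF support, which \function{dump} requires) saves the data held by each process; with more than one \MPI process each process writes its own file:
\begin{python}
from esys.escript import *
from esys.finley import Rectangle

dom = Rectangle(10, 10)   # a small example domain, distributed over the MPI processes
x = dom.getX()            # coordinates held by this process
# dump() stores process-local data; with more than one MPI process the
# process rank is appended to the file name automatically.
x.dump("coordinates.nc")
\end{python}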


\section{Hints for MPI Programming}
In general a script based on the \escript module does not require modifications to run under \MPI. However, one needs to be careful if other modules are used.

When \MPI is used on more than one process ($\var{nn} \cdot \var{np} >1$) the user needs to keep in mind that several copies of the script are executed at the same time\footnote{In the case of OpenMP only one copy is running but \escript temporarily spawns threads.}, while data exchange is performed through the \escript module.

This has three main implications:
\begin{enumerate}
\item most arguments (\var{Data} excluded) should have the same values on all processors, e.g. \var{int}, \var{float}, \var{str}
and \numpy parameters.
\item the same operations will be called on all processors.
\item different processors may store different amounts of information.
\end{enumerate}

With a few exceptions\footnote{e.g. \function{getTupleForDataPoint}}, values of types \var{int}, \var{float}, \var{str}
and \numpy returned by \escript will have the same value on all processors.
If values produced by other modules are used as arguments, the user has to make sure that the argument values are identical
on all processors. For instance, the use of a random number generator to create argument values bears the risk that
the value may depend on the processor.
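One simple way to avoid this problem is to seed the generator with the same value on every process, as in the following sketch using \numpy (the seed value is arbitrary):
\begin{python}
import numpy
# seeding the generator identically on every MPI process ensures that the
# generated argument value is the same everywhere
numpy.random.seed(1234)
alpha = numpy.random.uniform(0., 1.)   # identical on all processes
\end{python}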

Some operations in \escript require communication with all processors executing the job.
It is not always obvious which operations these are.
For example, \var{Lsup} returns the largest value across all processors.
\var{getValue} on a \var{Locator} may refer to a value stored on another processor.
For this reason it is better if scripts do not perform conditional operations (which manipulate data) based on which processor the script is running on.
Crashing or hanging scripts can be an indication that this has happened.
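To illustrate, the following sketch (assuming the {\it finley} domain module is available; \function{getMPIRankWorld} is provided by \escript) shows the kind of rank-dependent branch that is prone to hanging, because \var{Lsup} involves communication between all processors:
\begin{python}
from esys.escript import *
from esys.finley import Rectangle

dom = Rectangle(10, 10)
u = dom.getX()[0]

m = Lsup(u)   # safe: every process takes part in the global reduction

# risky: if only rank 0 called the collective operation, the remaining
# processes would wait forever and the script would hang
# if getMPIRankWorld() == 0:
#     m = Lsup(u)
\end{python}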

It is not always possible to divide data evenly amongst processors.
In fact some processors might not have any data at all.
Try to avoid writing scripts which iterate over data points;
instead, try to describe the operation you wish to perform as a whole.

Special attention is required when using files on more than one processor as
several processors access the file at the same time. Opening a file for
reading is safe; however, the user has to make sure that the variables which are
set from reading data from files are identical on all processors.
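For instance (a minimal sketch; the file name \file{params.txt} and its content are hypothetical), every process may open and read the same parameter file, so that the resulting variable holds an identical value on all processors:
\begin{python}
# every process opens the same file for reading; as long as they all see the
# same file content, the parameter value is identical on all processors
with open("params.txt") as f:
    dt = float(f.readline())
\end{python}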

When writing data to a file it is important that only one processor writes to
the file at any time. As all values in \escript are global it is sufficient
to write values on the processor with \MPI rank $0$ only.
The \class{FileWriter} class provides a convenient way to write global data
to a simple file. The following script writes to the file
\file{test.txt} on the processor with \MPI rank $0$ only:
\begin{python}
from esys.escript import *
f = FileWriter('test.txt')
f.write('test message')
f.close()
\end{python}
We strongly recommend using this class rather than the built-in \function{open}
function as it guarantees a script which will run in single processor mode as well as under \MPI.

If one of the processors throws an exception,
for instance because opening a file for writing fails, the other processors
are not automatically made aware of this, since \MPI
does not handle exceptions.
However, \MPI will terminate the other processes, but
may not inform the user of the reason in an obvious way. The user needs to inspect the
error output files to identify the exception.