\chapter{Execution of an {\it escript} Script}
\label{EXECUTION}

\section{Overview}
A typical way of starting your {\it escript} script \file{myscript.py} is with the \program{escript} command\footnote{The \program{escript} launcher is not supported under \WINDOWS yet.}:
\begin{verbatim}
escript myscript.py
\end{verbatim}
as already shown in section~\ref{FirstSteps}\footnote{For this discussion, it is assumed that \program{escript} is included in your \env{PATH} environment variable. See the installation guide for details.}.
In some cases it can be useful to work interactively, e.g. when debugging a script, using the command
\begin{verbatim}
escript -i myscript.py
\end{verbatim}
This will execute \var{myscript.py} and, when it completes (or an error occurs), provide a \PYTHON prompt.
To leave the prompt press \kbd{Control-d}.

To start
\program{escript} using four threads (e.g. if you use a multi-core processor) you can use
\begin{verbatim}
escript -t 4 myscript.py
\end{verbatim}
This requires {\it escript} to be compiled for \OPENMP~\cite{OPENMP}.

To start \program{escript} using \MPI~\cite{MPI} with $8$ processes you use
\begin{verbatim}
escript -p 8 myscript.py
\end{verbatim}
If the processors used are multi-core processors or multi-processor shared memory architectures, you can use threading in addition to \MPI. For instance, to run $8$ \MPI processes with $4$ threads each, you use the command
\begin{verbatim}
escript -p 8 -t 4 myscript.py
\end{verbatim}
In the case of a supercomputer or a cluster, you may wish to distribute the workload over a number of nodes\footnote{For simplicity, we will use the term node to refer to either a node in a supercomputer or an individual machine in a cluster.}.
For example, to use $8$ nodes, with $4$ \MPI processes per node, write
\begin{verbatim}
escript -n 8 -p 4 myscript.py
\end{verbatim}
Since threading has some performance advantages over processes, you may specify a number of threads as well:
\begin{verbatim}
escript -n 8 -p 4 -t 2 myscript.py
\end{verbatim}
This runs the script on $8$ nodes, with $4$ processes per node and $2$ threads per process.

\section{Options}
The general form of the \program{escript} launcher is as follows:

%%%%
% If you are thinking about changing this please remember to update the man page as well
%%%%

\program{escript}
\optional{\programopt{-n \var{nn}}}
\optional{\programopt{-p \var{np}}}
\optional{\programopt{-t \var{nt}}}
\optional{\programopt{-f \var{hostfile}}}
\optional{\programopt{-x}}
\optional{\programopt{-V}}
\optional{\programopt{-e}}
\optional{\programopt{-h}}
\optional{\programopt{-v}}
\optional{\programopt{-o}}
\optional{\programopt{-c}}
\optional{\programopt{-i}}
\optional{\programopt{-b}}
\optional{\var{file}}
\optional{\var{ARGS}}

where \var{file} is the name of a script and \var{ARGS} are the arguments for the script.
The \program{escript} program will import your current environment variables.
If no \var{file} is given, you will be presented with a \PYTHON prompt (see \programopt{-i} for restrictions).

The options are used as follows:
\begin{itemize}

\item[\programopt{-n} \var{nn}] the number of compute nodes \var{nn} to be used. The total number of processes being used is
$\var{nn} \cdot \var{np}$. This option overrides the value of the \env{ESCRIPT_NUM_NODES} environment variable.
If a hostfile is given, the number of nodes needs to match the number of hosts given in the host file.
If $\var{nn}>1$ but {\it escript} is not compiled for \MPI, a warning is printed but execution continues with $\var{nn}=1$. If \programopt{-n} is not set, the
number of hosts in the host file is used. The default value is 1.

\item[\programopt{-p} \var{np}] the number of \MPI processes per node. The total number of processes to be used is
$\var{nn} \cdot \var{np}$. This option overrides the value of the \env{ESCRIPT_NUM_PROCS} environment variable. If $\var{np}>1$ but {\it escript} is not compiled for \MPI, a warning is printed but execution continues with $\var{np}=1$. The default value is 1.

\item[\programopt{-t} \var{nt}] the number of threads used per process.
The option overrides the value of the \env{ESCRIPT_NUM_THREADS} environment variable.
If $\var{nt}>1$ but {\it escript} is not compiled for \OPENMP, a warning is printed but execution continues with $\var{nt}=1$. The default value is 1.

\item[\programopt{-f} \var{hostfile}] the name of a file with a list of host names. Some systems require the addresses or names of the compute nodes on which the \MPI processes should be spawned to be specified. These addresses or names are listed in the file \var{hostfile}. If \programopt{-n} is set, the number of different
hosts defined in \var{hostfile} must be equal to the number of requested compute nodes \var{nn}. The option overrides the value of the \env{ESCRIPT_HOSTFILE} environment variable. By default no host file is used (see the example after this list).
\item[\programopt{-c}] prints information about the settings used to compile {\it escript} and stops execution.
\item[\programopt{-V}] prints the version of {\it escript} and stops execution.
\item[\programopt{-h}] prints a help message and stops execution.
\item[\programopt{-i}] executes the script \var{file} and switches to interactive mode after the execution is finished or an exception has occurred. This option is useful for debugging a script. The option cannot be used if more than one process ($\var{nn} \cdot \var{np}>1$) is used.
\item[\programopt{-b}] does not invoke \PYTHON. This option is used to run non-\PYTHON programs.

\item[\programopt{-e}] shows additional environment variables and commands used during \program{escript} execution. This option is useful if users wish to execute scripts without using the \program{escript} command.

\item[\programopt{-o}] switches on the redirection of output of processes with \MPI rank greater than zero to the files \file{stdout_\var{r}.out} and \file{stderr_\var{r}.out} where \var{r} is the rank of the process. The option overrides the value of the \env{ESCRIPT_STDFILES} environment variable.

% \item[\programopt{-x}] interpret \var{file} as an \esysxml\footnote{{\it esysxml} has not been released yet.} task.
% This option is still experimental.

\item[\programopt{-v}] prints some diagnostic information.
\end{itemize}
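As a brief illustration of the \programopt{-f} option, assume a cluster whose four compute nodes are reachable under the (placeholder) names \file{node01} to \file{node04}. A matching host file simply lists these names, one per line:
\begin{verbatim}
node01
node02
node03
node04
\end{verbatim}
With this file saved as \file{hosts.txt}, a hypothetical invocation using all four nodes with two \MPI processes and four threads each would be
\begin{verbatim}
escript -f hosts.txt -n 4 -p 2 -t 4 myscript.py
\end{verbatim}
Note that the number of nodes requested with \programopt{-n} matches the number of hosts listed in the file, as required.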
\subsection{Notes}
\begin{itemize}
\item Make sure that \program{mpiexec} is in your \env{PATH}.
\item For MPICH and INTELMPI, if a hostfile is present,
\program{escript} will start the \program{mpd} daemon before execution.
\end{itemize}

\section{Input and Output}
When \MPI is used on more than one process ($\var{nn} \cdot \var{np} >1$) no input from the standard input is accepted. Standard output on any process other than the master process (\var{rank}=0) will not be available.
Error output from any process will be redirected to the node where \program{escript} has been invoked.
If the \programopt{-o} option is used or \env{ESCRIPT_STDFILES} is set\footnote{That is, it has a non-empty value.}, then the standard and error output from any process other than the master process will be written to the files \file{stdout_\var{r}.out} and \file{stderr_\var{r}.out} (where
\var{r} is the rank of the process).

If files are created or read by individual \MPI processes with information local to the process (e.g. in the \function{dump} function) and more than one process is used ($\var{nn} \cdot \var{np} >1$), the \MPI process rank is appended to the file names.
This avoids problems if the processes are using a shared file system.
Files which collect data that is global for all \MPI processes are created by the process with \MPI rank 0 only.
Users should keep in mind that if the file system is not shared, then a file containing global information
which is read by all processes needs to be copied to the local file system before \program{escript} is invoked.
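A minimal sketch of such process-local output is shown below; it assumes that {\it escript} has been built with netCDF support and that the esys.finley module is available, and the domain size and file name are placeholders:
\begin{python}
# a minimal sketch (assumes netCDF support and esys.finley; names are placeholders)
from esys.escript import *
from esys.finley import Rectangle

dom = Rectangle(10, 10)  # a small rectangular domain, distributed over the processes
x = dom.getX()           # coordinates held by each process
x.dump('coords.nc')      # on more than one process, the rank is appended to the file name
\end{python}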


\section{Hints for MPI Programming}
In general, a script based on the \escript module does not require modifications to run under \MPI. However, one needs to be careful if other modules are used.

When \MPI is used on more than one process ($\var{nn} \cdot \var{np} >1$) the user needs to keep in mind that several copies of the script are executed at the same time\footnote{In the case of OpenMP only one copy is running, but \escript temporarily spawns threads.}, while data exchange is performed through the \escript module. At any time,
\escript assumes that an argument of type \var{int}, \var{float}, \var{str}
or \numpy has an identical value across all processes. All
values of these types returned by \escript have the same value on all processes.
If values produced by other modules are used as arguments, the user has to make sure that the argument values are identical on all processes. For instance, the use of a random number generator to create argument values bears the risk that the values may differ between processes.
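One simple way to avoid this, sketched below under the assumption that the standard \PYTHON random module is used, is to seed the generator with a fixed value so that every process produces the same sequence:
\begin{python}
# a minimal sketch: seed the generator with a fixed value so that
# every MPI process generates exactly the same number
import random

random.seed(42)            # identical seed on all processes
value = random.random()    # therefore identical value on all processes
\end{python}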

Special attention is required when using files on more than one process, as
several processes may access the same file at the same time. Opening a file for
reading is safe; however, the user has to make sure that the variables which are
set from the data read from a file are identical on all processes.
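As an illustration, the following minimal sketch reads a single parameter from a (placeholder) text file \file{params.txt}; since every process opens and reads the same file, the resulting value is identical everywhere, provided the file is visible to all processes:
\begin{python}
# a minimal sketch: every process reads the same (placeholder) file,
# so the resulting value is identical on all processes
f = open('params.txt', 'r')
tolerance = float(f.readline())   # first line holds a single number
f.close()
\end{python}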

When writing data to a file it is important that only one process is writing to
the file at any time. As all values in \escript are global, it is sufficient
to write values on the process with \MPI rank $0$ only.
The \class{FileWriter} class provides a convenient way to write global data
to a simple file. The following script writes to the file
\file{test.txt} on the process with rank $0$ only:
\begin{python}
from esys.escript import *
f = FileWriter('test.txt')
f.write('test message')
f.close()
\end{python}
It is highly recommended to use this class rather than the built-in \function{open}
function, as it guarantees a script that runs in single-process mode as well as under \MPI.

If one of the processes throws an exception, for instance because opening a file
for writing fails, the other processes are not automatically made aware of this, as \MPI
does not handle exceptions. \MPI will terminate the other processes, but it
may not inform the user of the reason in an obvious way. The user needs to inspect the
error output files to identify the exception.
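To make it easier to see which process failed, an exception can be tagged with the process rank before it is re-raised. The following is a rough sketch only; it assumes that \function{getMPIRankWorld} is provided by the \escript module and uses a placeholder file name:
\begin{python}
# a rough sketch: report the rank of the failing process before re-raising
# (assumes getMPIRankWorld from esys.escript; 'results.txt' is a placeholder)
from esys.escript import *

try:
    f = FileWriter('results.txt')
    f.write('some global result')
    f.close()
except Exception as e:
    print("process %d failed: %s" % (getMPIRankWorld(), e))
    raise
\end{python}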
