% branches/doubleplusgood/doc/inversion/CookGravity.tex
% Revision 4345, Fri Mar 29 07:09:41 2013 UTC, by jfenwick (Spelling fixes)
\chapter{Gravity Inversion}\label{Chp:cook:gravity inversion}

\section{Introduction}

In this part of the documentation we give an introduction on how to use the
\downunder module and the inversion driver functions to perform the inversion
of gravity and magnetic data.
The driver functions enable geologists and geophysicists to apply the
\downunder module quickly and easily without requiring detailed
knowledge of the theory behind inversion or programming skills.
However, users who are interested in specializing or extending the inversion
capability are referred to Part~\ref{part2} of this manual.
It is beyond the intention of this manual to give a detailed introduction to
geophysical inversion, in particular to the appropriate preprocessing of data
sets.

The \downunder module described here is designed to calculate estimates of
the 3-D distribution of density and/or susceptibility from 2-D gravity and
magnetic data measured in ground or airborne surveys.
This process is generally called inversion of geophysical data.
Following the standard approach, it is assumed that the data are measured as
perturbations of an expected gravity and/or magnetic field of the Earth.
In this context the measured gravity and magnetic data in fact describe
anomalies in the gravity and magnetic field.
As a consequence the inversion process provides corrections to an average
density (typically $2670 \; kg/m^3$) and susceptibility (typically $0$).
In the following we will therefore always assume that the given data are
anomalies and will not in all cases explicitly use the terms gravity anomaly
or density correction but simply speak of gravity and density.

In this chapter we give a detailed introduction to the usage of the driver
functions for the inversion of gravity data.
In the following chapters~\ref{Chp:cook:magnetic inversion} and~\ref{Chp:cook:joint inversion}
we will discuss the inversion of magnetic data, and the joint inversion of
gravity and magnetic data using \downunder.
Note that the principles introduced in this chapter apply to magnetic and
joint inversion, so the presentation for these problem classes is kept short
and users interested in magnetic data only should still work through this
chapter on gravity data.

To run the examples discussed you need to have \escript (version 3.3.1 or
newer) installed on your computer.
Moreover, if you want to visualize the results you need to have access to
data plotting software which is able to process \VTK input files, e.g.
\mayavi or \VisIt.
As \mayavi can be easily obtained and installed for most platforms, the
tutorial includes commands to visualize output files using \mayavi.
However, it is pointed out that \VisIt is the preferred visualization tool for
\escript as it can deal with very large data sets more efficiently.

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{QLDWestGravityDataPlot.png}
\caption{Gravity anomaly data in $mgal$ from Western Queensland, Australia
(file \examplefile{data/QLDWestGravity.nc}). Data obtained from Geoscience Australia.}
%\AZADEH{Tracey R, Bacchin M, \& Wynne P. 2008. In preparation. AAGD07: A new absolute gravity datum for Australian gravity and new standards for the Australian National Gravity Database. Exploration Geophysics.}}
\label{FIG:P1:GRAV:0}
\end{figure}

\begin{figure}
\centering%
\includegraphics[width=0.9\textwidth]{QLDGravContourMu10.png}
\caption{3-D contour plot of the density distribution obtained by inversion of
file \file{data/QLDWestGravity.nc} (with $\mu=10$).
Colours represent values of density, where high values are represented by
blue and low values are represented by red.}
\label{FIG:P1:GRAV:1}
\end{figure}

\section{How does it work?}
The execution of the inversion is controlled by a script which, in essence, is
a text file and can be edited using any text editor.
The script contains a series of statements which are executed by an
interpreter, an executable program which reads the text file and executes
its statements line by line.
In the case of \downunder the interpreter is \Python.
In order to be able to process the statements in each line of the script,
certain rules (called syntax) need to be obeyed.
There is a large number of online tutorials for \Python available\footnote{e.g.
\url{http://www.tutorialspoint.com/python} and \url{http://doc.pyschools.com}}.
We also refer to the \escript cook book \cite{ESCRIPTCOOKBOOK} and user's
guide \cite{ESCRIPT}, which is particularly useful for users who would like
to dive deeper into \downunder.
For this part of the manual no \Python knowledge is required, but it is
recommended that users acquire some basic knowledge of \Python as they
progress in their work with \downunder.

The following script~\ref{code: gravity1}\footnote{The script is similar to
\examplefile{grav_netcdf.py} within the \escript example file directory.} is a
simple example to run an inversion for gravity data:

\begin{pyc}\label{code: gravity1}
\begin{python}
# Header:
from esys.downunder import *
from esys.weipa import *
from esys.escript import unitsSI as U

# Step 1: set up domain
dom=DomainBuilder()
dom.setVerticalExtents(depth=40.*U.km, air_layer=6.*U.km, num_cells=25)
dom.setFractionalPadding(pad_x=0.2, pad_y=0.2)
dom.fixDensityBelow(depth=40.*U.km)

# Step 2: read gravity data
source0=NetCdfData(NetCdfData.GRAVITY, 'GravitySmall.nc')
dom.addSource(source0)

# Step 3: set up inversion
inv=GravityInversion()
inv.setSolverTolerance(1e-4)
inv.setSolverMaxIterations(50)
inv.setup(dom)

# Step 4: run inversion
inv.getCostFunction().setTradeOffFactorsModels(10.)
rho = inv.run()

# Step 5: write reconstructed density to file
saveVTK("result.vtu", density=rho)
\end{python}
\end{pyc}
The result, in this case the density distribution, is written to an external
file for further processing. You can copy and paste the text of the script
into a file of any name; for further reference let us use the file
name \file{grav.py}.
It is recommended to use the extension \file{.py} to identify the file as a
\Python script.
We will discuss the statements of the script later in this chapter.

Obviously the inversion needs to be fed with some gravity data. You can find
example data from western Queensland, Australia in two resolutions in the
\escript example directory. In this case the data are loaded from the file
\examplefile{GravitySmall.nc} which is given in the \netcdf file format.
After you have copied this file into the directory in which you have saved the
script \file{grav.py} you can run the program using the command line
\begin{verbatim}
run-escript grav.py
\end{verbatim}
We are running \file{grav.py} through the \escript start-up command since
\escript is used as a back end for the inversion algorithm\footnote{Please see
the \escript user's guide~\cite{ESCRIPT} on how to run your script in parallel
using threading and/or MPI.}.
Obviously it is assumed that you have an installation of \escript available on
your computer, see \url{https://launchpad.net/escript-finley}.

After the execution has successfully completed you will find the result file
\file{result.vtu} in the directory from where you have started the execution
of the script.
The file has the \VTK format and can be imported easily into many
visualization tools.
One option is the \mayavi package which is available on most platforms.
You can invoke the visualization using the command
\begin{verbatim}
mayavi2 -d result.vtu -m SurfaceMap
\end{verbatim}
from the command line.
Figure~\ref{FIG:P1:GRAV:1} shows the result of this inversion as a contour
plot\footnote{These plots were generated by \VisIt using the higher resolution
data.}, while the gravity anomaly data is shown in Figure~\ref{FIG:P1:GRAV:0}.
We will discuss the data later in Section~\ref{SEC:P1:GRAV:DATA}.

Let us take a closer look at the script\footnote{In \Python lines starting
with `\#` are comments and are not processed further.}. Besides the header
section one can separate the script into five steps:
\begin{enumerate}
\item set up the domain on which the inversion problem is solved
\item load the data
\item set up the inversion problem
\item run the inversion
\item further processing of the result. Here we write the reconstructed
density distribution to a file.
\end{enumerate}
In the following we will discuss the steps of the script in more detail.
Before we do this it is pointed out that the header section, following
\Python conventions, makes all required packages available for access within
the script.
At this point we will not discuss this in more detail but emphasize that the
header section is a vital part of the script.
It is required in each \downunder inversion script and should not be
altered except if additional modules are needed.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{dom2D.pdf}
\caption{2-D domain set-up for gravity inversion}
\label{FIG:P1:GRAV:2}
\end{figure}

\section{Creating the Inversion Domain}
The first step in script~\ref{code: gravity1} is the definition of the domain
over which the inversion is performed.
We think of the domain as a block with orthogonal, planar faces.
Figure~\ref{FIG:P1:GRAV:2} shows the set-up for a two-dimensional domain
(see also Figure~\ref{fig:domainBuilder} for 3-D).
The lateral coordinates along the surface of the Earth are denoted by $x$ and
$y$ (only the $x$-direction is used in 2-D).
The $z$ direction defines the vertical direction, where the part above the
surface has positive coordinates and the subsurface negative coordinates.
The height of the section above the surface, which is assumed to be filled
with air, needs to be set by the user.
The inversion assumes that the density in this section is known to be
zero\footnote{Always keeping in mind that these are not absolute values but
anomalies.}.
The density below the surface is unknown and is calculated through the
inversion. The user needs to specify the depth below the surface to which the
density is to be calculated.
The lateral extent of the domain is defined by the data sets fed into the
inversion.
It is chosen large enough to cover all data sets (in case more than one is
used). In order to reduce the impact of the boundary, a padding zone around
the data sets can be introduced.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{dom2DFEM.pdf}
\caption{Cell distribution and boundary conditions for a 2-D domain}
\label{FIG:P1:GRAV:3}
\end{figure}

The reconstruction of the gravity field from an estimated density distribution
is the key component of the inversion process.
To do this \downunder uses the finite element method (FEM).
We need to introduce a grid over the domain, see Figure~\ref{FIG:P1:GRAV:3}.
The number of vertical cells is set by the user while the number of horizontal
cells is derived from the grid spacing of the gravity data set(s).
It is assumed that the gravity field data given are constant across a cell.
To be able to reconstruct the gravity field some assumptions on the values of
the gravity field on the domain boundary have to be made.
\downunder assumes that on all faces the lateral gravity field components
equal zero. No assumptions on the vertical component are
made\footnote{It is assumed that the gravity potential equals zero on the top
and bottom surface, see Section~\ref{sec:forward gravity} for details.}%
\footnote{Most inversion codes use Green's functions over an unbounded domain
to reconstruct the gravity field. This approach makes the assumption that the
gravity field (or potential) converges to zero when moving away from the
region of interest. The boundary conditions used here are stronger in the
sense that the lateral gravity components are enforced to be zero at a defined
distance from the region of interest, but weaker in the sense that no
constraint on the vertical component is applied.}.

In script~\ref{code: gravity1} the statement
\begin{verbatim}
dom=DomainBuilder()
\end{verbatim}
creates something like a factory to build a domain.
We then define the features of the domain we would like to create:
\begin{verbatim}
dom.setVerticalExtents(depth=40.*U.km, air_layer=6.*U.km, num_cells=25)
dom.setFractionalPadding(pad_x=0.2, pad_y=0.2)
\end{verbatim}
Here we set the depth of the domain to $40 km$, the thickness of the air
layer above the surface to $6 km$ and the number of vertical cells to $25$.
We also introduce a lateral padding of $20 \%$ of the extent of the gravity
data on each side of the data and in both lateral directions.

In some cases it can be appropriate to assume that the density below a certain
depth is zero\footnote{As we are in fact calculating density corrections this
means that the density is assumed to be the average density.}.
The statement
\begin{verbatim}
dom.fixDensityBelow(depth=40.*U.km)
\end{verbatim}
introduces this constraint.
If, as in the case discussed here, the depth for zero density is not less than
the depth of the domain, no constraint at depth is applied to the density.

\downunder uses the metre-kilogram-second based International System of Units
(SI)\footnote{see \url{http://en.wikipedia.org/wiki/International_System_of_Units}}\index{SI}.
So all values must be converted to appropriate units.
This does not apply to geographic coordinates, which in \downunder are given in
fractional degrees (as a floating point number) to represent longitude and
latitude. In the script we have used the expression
\begin{verbatim}
depth=40.*U.km
\end{verbatim}
to set the depth of the domain to $40 km$.
The expression \verb|U.km| denotes the unit $km$ (kilometer) and ensures
the appropriate conversion of the value $40$ into the base unit $m$ (meter).
It is recommended to add units to values (where present) in order to make sure
that the final values handed into \downunder are given in the appropriate
units.
The physical units module of \escript, which we have imported here under the
name \verb|U| in the script header, defines a large number of physical units
and constants, please see~\cite{ESCRIPT} and~\cite{ESCRIPTONLINE}.

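The convention behind \verb|U.km| can be illustrated with a small standalone
sketch. The class \verb|U| below is a hypothetical stand-in for
\verb|esys.escript.unitsSI|, reduced to three units; the idea is simply that
each unit is the factor converting a value into the SI base unit:

```python
# Minimal sketch of the unit convention (a hypothetical stand-in for
# esys.escript.unitsSI): each unit is the factor that converts a value
# into the SI base unit.
class U:
    m = 1.0        # metre, the base unit of length
    km = 1000.0    # kilometre expressed in metres
    mgal = 1e-5    # milligal expressed in m/s^2

depth = 40. * U.km       # handed on as 40000.0 m
gravity = 500. * U.mgal  # 500 mgal becomes 0.005 m/s^2
print(depth, gravity)
```

Multiplying by the unit thus yields a plain number in base units, which is
what \downunder expects.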
\section{Loading Gravitational Data}\label{SEC:P1:GRAV:DATA}
In practice gravity acceleration is measured in various ways, for instance by
airborne surveys~\cite{Telford1990a}.
\downunder assumes that all data supplied as input are already appropriately
pre-processed. In particular, corrections for
\begin{itemize}
\item free-air, to remove effects from altitude above ground;
\item latitude, to remove effects from the ellipsoidal shape of the Earth;
\item terrain, to remove effects from topography
\end{itemize}
must have been applied to the data.
In general, data prepared in such a form are called Bouguer anomalies~\cite{Telford1990a}.

To load gravity data into \downunder the data are given on a plane parallel
to the surface of the Earth at a constant altitude, see
Figure~\ref{FIG:P1:GRAV:2}.
The data need to be defined over a rectangular grid covering a subregion of
the Earth's surface.
The grid uses a geographic coordinate system with latitudes and longitudes
assumed to be given in the Clarke 1866 geodetic system.
Figure~\ref{FIG:P1:GRAV:0} shows an example of such a data set from the
western parts of Queensland, Australia.
The data set covers a rectangular region between $140^\circ$ and $141^\circ$
east and between $20^\circ$ and $21^\circ$ south.
Notice that latitude varies between $-90^\circ$ and $90^\circ$, where negative
signs refer to places in the southern hemisphere, and longitude varies between
$-180^\circ$ and $180^\circ$, where negative signs refer to places west of
Greenwich.
The colour at a location represents the value of the vertical Bouguer gravity
anomaly at this point at the surface of the Earth.
Values in this data set range from $-160 \; mgal$ to about $500 \; mgal$\footnote{The unit
$mgal$ means milli $gal$ (galileo) with $1 \; gal = 0.01 \frac{m}{sec^2}$.}
over a $121 \times 121$ grid.

In general, a patch of gravity data needs to be defined over a plane of
\verb|NX| $\times$ \verb|NY| points, where \verb|NX| and \verb|NY| define the
number of grid lines in the longitude (\verb|X|) and the latitude (\verb|Y|)
direction, respectively.
The grid is spanned from an origin with spacing \verb|DELTA_X| and
\verb|DELTA_Y| in the longitude and the latitude direction, respectively.
The gravity data for all grid points need to be given as an \verb|NX|
$\times$ \verb|NY| array.
If available, measurement errors can be associated with the gravity data.
The values are given as an \verb|NX| $\times$ \verb|NY| array matching the
shape of the gravity array.
Note that data need not be available on every single point of the grid, see
Section~\ref{SEC:P1:GRAV:REMARK:DATAHOLES} for more information on this.

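The grid layout just described can be sketched in a few lines of plain
\Python. The values of \verb|NX|, \verb|NY|, origin and spacings below are
illustrative only, chosen to resemble the one-degree patch of
Figure~\ref{FIG:P1:GRAV:0}:

```python
# Sketch of the data-grid layout: NX x NY points spanned from an origin
# with spacings DELTA_X and DELTA_Y (illustrative values for a one-degree
# patch sampled on a 121 x 121 grid).
NX, NY = 121, 121
ORIGIN_X, ORIGIN_Y = 140.0, -21.0     # longitude/latitude of the origin
DELTA_X = DELTA_Y = 1.0 / (NX - 1)    # grid spacing in degrees

def grid_point(i, j):
    """Longitude and latitude of grid point (i, j)."""
    return (ORIGIN_X + i * DELTA_X, ORIGIN_Y + j * DELTA_Y)

# gravity values (and, optionally, errors) are NX x NY arrays on this grid
gravity = [[0.0] * NX for _ in range(NY)]
lon, lat = grid_point(NX - 1, NY - 1)  # farthest corner, near (141, -20)
```

The gravity and error arrays must share this shape so that each value can be
attached to one grid point.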
Currently, two data file formats are supported, namely \emph{ER Mapper Raster}
\cite{ERMAPPER} files and \netcdf~\cite{NETCDF} files.
In the examples of this chapter we use \netcdf files and refer to
Section~\ref{SEC:P1:GRAV:REMARK:ERMAPPER} and
Section~\ref{sec:ref:DataSource:ERM} for more information on using ER Mapper
Raster files.
If you have data in any other format you have the option of writing a suitable
reader (for advanced users, see Chapter~\ref{Chp:ref:data sources}) or,
assuming you are able to read the data in \Python, refer to the example
script \examplefile{create_netcdf.py} which shows how to create a file
in the \netcdf file format~\cite{NETCDF} compatible with \downunder from
a data array.

In script~\ref{code: gravity1} we use the statement
\begin{verbatim}
source0=NetCdfData(NetCdfData.GRAVITY, 'GravitySmall.nc')
\end{verbatim}
to load the gravity data stored in \examplefile{GravitySmall.nc} in the
\netcdf format.
Within the script the data set is now available under the name \verb|source0|.
We need to link the data set to the \verb|DomainBuilder| using
\begin{verbatim}
dom.addSource(source0)
\end{verbatim}
At the time the domain for the inversion is built the \verb|DomainBuilder|
will use the information about origin, extent and spacing, along with the
other options provided, to build an appropriate domain.
As at this point a flat Earth is assumed, the geographic coordinates used to
represent data in the input file are mapped to a (local) Cartesian coordinate
system. This is achieved by projecting the geographic coordinates into the
\emph{Universal Transverse Mercator} (UTM) coordinate system\footnote{See e.g.
\url{http://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system}}.

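To get a feeling for this mapping, the following standalone sketch uses a
simple equirectangular approximation. This is \emph{not} the UTM projection
that \downunder actually applies, merely an illustration of the order of
magnitude of the resulting local Cartesian coordinates:

```python
import math

# Rough flat-Earth mapping of geographic coordinates to local Cartesian
# metres (an equirectangular approximation for illustration only;
# downunder itself uses the UTM projection).
R_EARTH = 6371.0e3   # mean Earth radius in metres

def to_local_xy(lon, lat, lon0, lat0):
    """Approximate easting/northing in metres relative to (lon0, lat0)."""
    x = math.radians(lon - lon0) * R_EARTH * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * R_EARTH
    return x, y

# one degree of longitude at about 20.5 degrees south spans roughly 104 km
x, y = to_local_xy(141.0, -20.0, 140.0, -20.5)
```

A one-degree patch therefore corresponds to a domain of the order of 100 km
across, which explains the kilometre-scale settings used for the vertical
extents above.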
There are a few optional arguments you can add when constructing a data source.
While Section~\ref{sec:ref:DataSource} has a detailed description of all
arguments, it is worth noting a few.
Firstly, it is important to know that data slices are assumed to be at altitude
$0$ by default. This can easily be changed though:
\begin{verbatim}
source0=NetCdfData(NetCdfData.GRAVITY, 'GravitySmall.nc', altitude=2.5*U.km)
\end{verbatim}
Another important setting is the scale or unit of the measurements.
The default depends on the data type; for gravity anomalies a scale
of $\frac{\mu m}{sec^2}$ (or $0.1 \; mgal$) is assumed. For instance, to
change the default scale to $mgal$ (which is $10^{-5} \frac{m}{sec^2}$),
you could use:
\begin{verbatim}
source0=NetCdfData(NetCdfData.GRAVITY, 'GravitySmall.nc', scale_factor=U.mgal)
\end{verbatim}
Finally, it is possible to specify measurement errors (i.e. uncertainties)
alongside the data.
Since these can never be zero, a value of $2$ units is used if nothing else
has been specified.
The error value is assumed to be given in the same units as the data, so the
default value translates to an error of $0.2 \; mgal$.
There are two possibilities to specify the error, namely by providing a
constant value which is applied to all data points:
\begin{verbatim}
source0=NetCdfData(NetCdfData.GRAVITY, 'GravitySmall.nc', error=1.7)
\end{verbatim}
or, if the information is available in the same \netcdf file under the name
\verb|errors|, by providing \downunder with the appropriate variable name:
\begin{verbatim}
source0=NetCdfData(NetCdfData.GRAVITY, 'GravitySmall.nc', error="errors")
\end{verbatim}


It is important to keep an eye on the complexity of the inversion.
A good measure is the total number of cells being used.
Assume we have been given a data set on a $20 \times 20$ grid and we add
lateral padding of, say, $20 \%$ to each side of the data; the lateral number
of cells becomes
$(20\cdot 1.4)\times (20\cdot 1.4)=1.4^2\cdot 20^2\approx 2\cdot 20^2=800$.
If we use $20$ cells in the vertical direction we end up with a total number
of $800 \times 20 = 16,000$ cells.
This size can be easily handled by a modern desktop PC.
If we increase the grid size of the data to $40 \times 40$ points and use $40$
cells in the vertical extent we get a total of $(2\cdot 40^2)\cdot 40=128,000$
cells, a problem size which is considerably larger but can still be handled by
a desktop computer.
Taking this one step further, if the amount of data is increased to
$200\times 200$ points and we use $200$ cells in the vertical extent the
domain will contain $16,000,000$ ($16$ million) cells.
This scenario requires a computer with enough memory and (a) fast processor(s)
to run the inversion.
This estimate of complexity growth applies to the case where the increase
of data grid size is driven by an increase of resolution, where it is
recommended to increase the vertical resolution in sync with the lateral
resolution. Note that if more than one data set is used the target resolution
will be the resolution of the finest data set (see also
Section~\ref{SEC:P1:GRAV:REMARK:MULTIDATA}).
In other cases the expansion of the region of interest drives the increase of
data grid size, and the increase of the total number of cells is less dramatic
as the vertical number of cells can remain constant while keeping a balanced
resolution in the vertical and lateral directions.

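The estimates above can be reproduced with a small helper function. This is a
standalone sketch; the function name and the rounding of the padded grid size
are our own choices:

```python
# Sketch of the cell-count estimate: fractional padding `pad` on each side
# multiplies the lateral number of cells by (1 + 2*pad) per direction.
def total_cells(nx, ny, nz, pad=0.2):
    fac = 1.0 + 2.0 * pad              # 20% on both sides -> factor 1.4
    return round(nx * fac) * round(ny * fac) * nz

small = total_cells(20, 20, 20)     # 28 * 28 * 20 = 15680, about 16,000
large = total_cells(200, 200, 200)  # about 15.7 million cells
```

Evaluating the helper for the grid sizes used in the text reproduces the
roughly 16,000-cell and 16-million-cell estimates.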
\section{Setting up the Inversion and Running it}
We are now at step three of script~\ref{code: gravity1} in which the actual
inversion is set up.
First we create an empty inversion under the name \verb|inv|:
\begin{verbatim}
inv=GravityInversion()
\end{verbatim}
As indicated by the name, we can use \verb|inv| to perform an inversion of
gravity data\footnote{\verb|GravityInversion| is a driver with a simplified
interface which is provided for convenience. See Part~\ref{part2} for more
details on how to write inversion scripts with more general functionality, e.g.
constraints.}. The inversion is an iterative process which sequentially
calculates updates to the density distribution in an attempt to improve the
match between the gravity field produced by the density distribution and the
data. Termination of the iteration is controlled by the tolerance which is
set by the user:
\begin{verbatim}
inv.setSolverTolerance(1e-4)
\end{verbatim}
Here we set the tolerance to $10^{-4}$, i.e. the iteration is terminated if
the maximum density correction is less than or equal to $10^{-4}$ relative to
the maximum value of the estimated density anomaly.
In case the iteration does not converge, a maximum number of iteration steps
is set:
\begin{verbatim}
inv.setSolverMaxIterations(50)
\end{verbatim}
If the maximum number of iteration steps (here $50$) is reached, the iteration
process is aborted and an error message is printed.
In this case you can try to rerun the inversion with a larger value for the
maximum number of iteration steps.
If even for a very large number of iteration steps no convergence is achieved,
it is very likely that the inversion has not been set up properly.

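The interplay of tolerance and maximum iteration count can be pictured with a
generic fixed-point loop. This is a schematic sketch only; the actual
\verb|GravityInversion| solver works on density fields, not scalars:

```python
# Schematic of the termination control: stop when the relative update falls
# below the tolerance, abort with an error after max_iter steps.
def iterate(update, m0, tol=1e-4, max_iter=50):
    m = m0
    for step in range(1, max_iter + 1):
        m_new = update(m)
        # relative stopping test, guarded against very small iterates
        if abs(m_new - m) <= tol * max(abs(m_new), 1.0):
            return m_new, step
        m = m_new
    raise RuntimeError("no convergence within %d iterations" % max_iter)

# toy update rule with fixed point 2; converges in well under 50 steps
m, steps = iterate(lambda m: 0.5 * m + 1.0, m0=0.0)
```

If the loop exhausts \verb|max_iter| the sketch raises an error, mirroring
the abort-with-message behaviour described above.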
The statement
\begin{verbatim}
inv.setup(dom)
\end{verbatim}
links the inversion with the domain and the data.
At this step -- as we are solving a gravity inversion problem -- only
gravitational data attached to the domain builder \verb|dom| are considered.
Internally a cost function $J$ is created which is minimized during the
inversion iteration.
It is a combination of a measure of the misfit between the data and the
gravity field produced by the density distribution, and a measure of the
smoothness of the density distribution.
The latter is often called the regularization term.
By default the gradient of the density is used as the regularization term, see
also Section~\ref{SEC:P1:GRAV:REMARK:REG}.
Obviously, the result of the inversion is sensitive to the weighting between
the misfit and the regularization.
This trade-off factor $\mu$ for the misfit function is set by the following
statement:
\begin{verbatim}
inv.getCostFunction().setTradeOffFactorsModels(0.1)
\end{verbatim}
Here we set $\mu=0.1$. The statement \verb|inv.setup| must appear in the
script before setting the trade-off factor.
A small value for the trade-off factor $\mu$ will give more emphasis to the
regularization component and create a smoother density distribution.
A large value of the trade-off factor $\mu$ will emphasize the misfit more
and typically creates a better fit to the data and a rougher density
distribution.
It is important to keep in mind that the regularization reduces noise in the
data and in fact gives the problem a unique solution.
Consequently, the trade-off factor $\mu$ must not be chosen too large in order
to control the noise in the solution and ensure convergence of the iteration
process.

We can now actually run the inversion:
\begin{verbatim}
rho = inv.run()
\end{verbatim}
The answer as calculated during the inversion is returned and can be accessed
under the name \verb|rho|.
As pointed out earlier, the iteration process may fail, in which case the
execution of the script is aborted with an error message.

\section{Taking a Look}
In the final step of script~\ref{code: gravity1} the calculated density
distribution is written to an external file.
A popular file format used by several visualization packages such as
\VisIt~\cite{VISIT} and \mayavi~\cite{MAYAVI} is the \VTK file format.
The result of the inversion, which has been named \verb|rho|, can be written to
the file \file{result.vtu} by adding the statement
\begin{verbatim}
saveVTK("result.vtu", density=rho)
\end{verbatim}
at the end of the script.
The inversion solution is tagged with the name \verb|density| in the result
file; however, any other name for the tag could be used.
As the format is text-based (as opposed to binary), \VTK files tend to be very
large and require considerable time to create, in particular for large
numbers of cells ($>10^6$).
For large problems it is more efficient to use the \SILO file format~\cite{SILO}.
\SILO files tend to be smaller and are generated and read faster.
It is the preferred format for importing results into the visualization program
\VisIt~\cite{VISIT}, which is particularly suited to the visualization of
large data sets.
Inversion results can be directly exported into \SILO files using the statement
\begin{verbatim}
saveSilo("result.silo", density=rho)
\end{verbatim}
replacing the \verb|saveVTK(...)| statement.
Similar to \VTK files, the result \verb|rho| is tagged with the name
\verb|density| so it can be identified in the visualization program.

\begin{figure}
\begin{center}
\subfigure[$\mu=0.1$]{%
\label{FIG:P1:GRAV:10 MU01}
\includegraphics[width=0.45\textwidth]{QLDGravContourMu01.png}
}%
\subfigure[$\mu=1.$]{%
\label{FIG:P1:GRAV:10 MU1}
\includegraphics[width=0.45\textwidth]{QLDGravContourMu1.png}
}\\ % ------- End of the first row ----------------------%
\subfigure[$\mu=10.$]{%
\label{FIG:P1:GRAV:10 MU10}
\includegraphics[width=0.45\textwidth]{QLDGravContourMu10.png}
}%
\subfigure[$\mu=100.$]{%
\label{FIG:P1:GRAV:10 MU100}
\includegraphics[width=0.45\textwidth]{QLDGravContourMu100.png}
}\\ % ------- End of the second row ----------------------%
\subfigure[$\mu=1000.$]{%
\label{FIG:P1:GRAV:10 MU1000}
\includegraphics[width=0.45\textwidth]{QLDGravContourMu1000.png}
}%
\end{center}
\caption{3-D contour plots of gravity inversion results with data from
Figure~\ref{FIG:P1:GRAV:0} for various values of the model trade-off
factor $\mu$. Visualization has been performed in \VisIt.
\AZADEH{check images.}}
\label{FIG:P1:GRAV:10}
\end{figure}

\begin{figure}
\begin{center}
\subfigure[$\mu=0.1$]{%
\label{FIG:P1:GRAV:11 MU01}
\includegraphics[width=0.45\textwidth]{QLDGravDepthMu01.png}
}%
\subfigure[$\mu=1.$]{%
\label{FIG:P1:GRAV:11 MU1}
\includegraphics[width=0.45\textwidth]{QLDGravDepthMu1.png}
}\\ % ------- End of the first row ----------------------%
\subfigure[$\mu=10.$]{%
\label{FIG:P1:GRAV:11 MU10}
\includegraphics[width=0.45\textwidth]{QLDGravDepthMu10.png}
}%
\subfigure[$\mu=100.$]{%
\label{FIG:P1:GRAV:11 MU100}
\includegraphics[width=0.45\textwidth]{QLDGravDepthMu100.png}
}\\ % ------- End of the second row ----------------------%
\subfigure[$\mu=1000.$]{%
\label{FIG:P1:GRAV:11 MU1000}
\includegraphics[width=0.45\textwidth]{QLDGravDepthMu1000.png}
}%
\end{center}
\caption{3-D slice plots of gravity inversion results with data from
Figure~\ref{FIG:P1:GRAV:0} for various values of the model trade-off
factor $\mu$. Visualization has been performed in \VisIt.
\AZADEH{check images.}}
\label{FIG:P1:GRAV:11}
\end{figure}

Figures~\ref{FIG:P1:GRAV:10} and~\ref{FIG:P1:GRAV:11} show two different
styles of visualization generated in \VisIt using the result of the inversion
of the gravity anomalies shown in Figure~\ref{FIG:P1:GRAV:0}.
The inversions have been performed with different values for the model
trade-off factor $\mu$.
The visualization clearly shows the smoothing effect of lower values for the
trade-off factor.
For larger values of the trade-off factor the density distribution becomes
rougher, showing more detail.
Computational costs are significantly higher for larger trade-off factors.
Moreover, noise in the data has a higher impact on the result.
Typically several runs are required to adjust the value of the trade-off
factor to the data sets used.

For some analysis tools it is useful to process the results in the form of
Comma-separated Values (\CSV)\footnote{see
\url{http://en.wikipedia.org/wiki/Comma-separated_values}}.
Such a file can be created using the statement
\begin{verbatim}
saveDataCSV("result.csv", x=rho.getFunctionSpace().getX(), density=rho)
\end{verbatim}
in the script.
This will create a file \file{result.csv} with columns separated by commas.
Each row contains the value of the density distribution and the three
coordinates of the corresponding location in the domain.
A header line specifies the meaning of each column.
Notice that rows are not written in a particular order and therefore, if
necessary, the user has to apply appropriate sorting of the rows.
Columns are written in alphabetic order of their corresponding tag names.
For the interested reader: the statement \verb|rho.getFunctionSpace()| returns
the type used to store the density data \verb|rho|.
The \verb|getX()| method returns the coordinates of the sampling points used
for this particular type of representation, see~\cite{ESCRIPT} for details.
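Since the rows are written in no particular order, a small post-processing
step is often needed before further analysis. The following sketch, using
only the Python standard library, sorts the rows by their coordinate columns;
the sample values and the exact header names are hypothetical and only
illustrate the layout described above.

```python
import csv
import io

# Hypothetical sample in the layout described above: a header line, columns
# in alphabetic order of their tag names, rows in no particular order.
# (The header names here are illustrative, not the exact escript output.)
sample = """density,x_0,x_1,x_2
2700.0,100.0,0.0,-50.0
2600.0,0.0,0.0,-50.0
2650.0,50.0,0.0,-50.0
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Sort by the three coordinate columns so that neighbouring sample points
# end up in neighbouring rows.
rows.sort(key=lambda r: (float(r["x_0"]), float(r["x_1"]), float(r["x_2"])))
densities = [float(r["density"]) for r in rows]
print(densities)  # [2600.0, 2650.0, 2700.0]
```

For a large file the same approach applies; only the source of the rows
changes from the in-memory string to \verb|open("result.csv")|.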
\section{Remarks}

\subsection{ER Mapper Raster Files}\label{SEC:P1:GRAV:REMARK:ERMAPPER}
The \downunder module can read data stored in ER Mapper Raster files. A data
set in this format consists of two files: a header file whose name usually ends
in \verb|.ers|, and the actual data file which has the same filename as the
header file but without any file extension.
These files are usually produced by a commercial software package and their
contents can be quite diverse.
Therefore, it is not guaranteed that every data set is supported by \downunder,
but the most common types of raster data should work\footnote{If your data
does not load please contact us through \url{https://launchpad.net/escript-finley}.}.

The interface for loading ER Mapper files is very similar to the \netcdf
interface described in Section~\ref{SEC:P1:GRAV:DATA}.
To load gravity data stored in the file pair\footnote{These files are available
in the example directory.} \examplefile{GravitySmall.ers} (the
header) and \examplefile{GravitySmall} (the data) without changing any of the
defaults use:
\begin{verbatim}
source0=ErMapperData(ErMapperData.GRAVITY, 'GravitySmall.ers')
\end{verbatim}
If your data set does not follow the default naming convention you can specify
the name of the data file explicitly:
\begin{verbatim}
source0=ErMapperData(ErMapperData.GRAVITY, 'GravitySmall.ers',
                     datafile='GravityData')
\end{verbatim}
Please note that there is no way for the reader to determine whether the two
files really form a pair, so make sure to pass the correct filenames when
constructing the reader object.
The same optional arguments explained in Sections~\ref{SEC:P1:GRAV:DATA}
and~\ref{SEC:P1:GRAV:REMARK:DATAHOLES} are available for ER Mapper data sets.
However, due to limitations of the file format, only a constant error value
is supported.

\subsection{Data With Holes}\label{SEC:P1:GRAV:REMARK:DATAHOLES}
As described previously in this chapter, input data is always given in the
form of a rectangular grid with constant cell size in each dimension.
However, this is not always the case in practice.
Consider an onshore data set which includes parts of the offshore region as in
Figure~\ref{FIG:P1:GRAV:onshoreoffshore}.
The valid data in this example has a value range of about $-600$ to $600$ and
the inversion is to be run based on these values only, disregarding the
offshore region.
To achieve this, the offshore region is \emph{masked} by using a constant
value which is not found within the onshore area.
Figure~\ref{FIG:P1:GRAV:onshoreoffshore} clearly shows this masked area in
dark blue since a mask value of $-1000$ was used.

\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{onshore-offshore.png}
\caption{Plot of a rectangular gridded onshore data set that includes
offshore regions which have a value (here $-1000$) not found within the
real data (Bouguer anomalies in Tasmania, courtesy of Geoscience Australia)}
\label{FIG:P1:GRAV:onshoreoffshore}
\end{figure}

The \netcdf conventions supported in \downunder include a standard way of
specifying such a mask value.
The example script \examplefile{create_netcdf.py} demonstrates how this is
easily accomplished with any data.
If, for any reason, the mask value in the input file is invalid it can be
overridden via the \verb|null_value| argument when constructing the
\verb|NetCdfData| object:
\begin{verbatim}
source0=NetCdfData(NetCdfData.GRAVITY, 'data0.nc', null_value=-1000)
\end{verbatim}
In this example, all data points that have a value of $-1000$ are ignored and
not used in the inversion.
Please note that the special value \emph{NaN} (not-a-number) is sometimes used
for masking in data sets.
Areas marked with this value are always disregarded in \downunder.
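The effect of such a null value can be illustrated independently of
\downunder: points matching the chosen value are simply excluded before any
further processing. The following sketch uses made-up sample values and
plain \verb|numpy|; it is not how \verb|NetCdfData| is implemented
internally, only an illustration of the masking idea.

```python
import numpy as np

# Made-up gridded anomaly values; -1000.0 plays the role of the null_value
# marking offshore points (these numbers are illustrative only).
data = np.array([[120.0, -1000.0, 480.0],
                 [-1000.0, -600.0, 300.0]])

valid = data[data != -1000.0]          # keep only the unmasked points
print(valid.min(), valid.max())        # -600.0 480.0
```

For NaN-masked data the comparison would instead be
\verb|data[~np.isnan(data)]|, since \verb|NaN != NaN| in floating-point
arithmetic.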
\subsection{Multiple Data Sets}\label{SEC:P1:GRAV:REMARK:MULTIDATA}
It is possible to run a single inversion using more than one input data set,
possibly in different file formats.
To do so, simply create the data sources and add them to the domain builder:
\begin{verbatim}
source0=NetCdfData(NetCdfData.GRAVITY, 'data0.nc')
source1=ErMapperData(ErMapperData.GRAVITY, 'data1.ers')
dom.addSource(source0)
dom.addSource(source1)
\end{verbatim}
However, there are some restrictions when combining data sets:
\begin{itemize}
\item Due to the coordinate transformation all data sets must be located in
      the same UTM zone. If a single data set crosses UTM zones only the zone
      of the central longitude is used when projecting.
      For example, if one data set lies mostly in zone 51 but contains areas
      of zone 52, it is transformed using zone 51. In this case more data
      from zone 51 can be added, but not from any other zone.
\item All data sets should have the same spatial resolution but this is not
      enforced. Combining data with different resolutions is currently
      considered experimental but works best when the resolutions are
      multiples of each other. For example, if the first data set has a
      resolution (or cell size) of $100$ metres and the second has a cell
      size of $50$ metres then the target domain will have a cell size of
      $50$ metres (the finer resolution) and each point of the coarse data
      will occupy two cells (in the respective dimension).
\end{itemize}
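The resolution rule in the last item can be sketched as a small calculation.
The helper \verb|combined_cell_size| below is hypothetical and not part of
the \downunder API; it merely restates the rule that the finest cell size
wins and each coarse cell spans a whole number of target cells.

```python
def combined_cell_size(cell_sizes):
    """Return the target cell size (the finest resolution) and, for each
    data set, how many target cells one of its cells occupies per dimension.
    Hypothetical helper illustrating the rule above; not part of
    esys.downunder."""
    target = min(cell_sizes)
    if any(s % target != 0 for s in cell_sizes):
        raise ValueError("cell sizes should be multiples of each other")
    return target, [s // target for s in cell_sizes]

# A 100 m grid combined with a 50 m grid yields a 50 m target domain;
# each 100 m cell spans two target cells.
print(combined_cell_size([100, 50]))  # (50, [2, 1])
```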
\subsection{Regularization Term}\label{SEC:P1:GRAV:REMARK:REG}
The \verb|GravityInversion| class supports the following form for the
regularization:
\begin{equation}
\int w^{(0)} \cdot \rho^2 + w^{(1)}_0 \rho_{,0}^2 + w^{(1)}_1 \rho_{,1}^2 + w^{(1)}_2 \rho_{,2}^2\; dx
\end{equation}
where the integral is calculated across the entire domain.
$\rho$ represents the density distribution, and $\rho_{,0}$, $\rho_{,1}$ and
$\rho_{,2}$ are the spatial derivatives of $\rho$ with respect to the two
lateral directions and the vertical direction, respectively.
$w^{(0)}$, $w^{(1)}_0$, $w^{(1)}_1$ and $w^{(1)}_2$ are weighting
factors\footnote{A more general form, e.g. spatially variable values for the
weighting factors, is supported, see Part~\ref{part2}.}.
By default these are $w^{(0)}=0$, $w^{(1)}_0=w^{(1)}_1=w^{(1)}_2=1$.
Other weighting factors can be set in the inversion set-up.
For instance, to set $w^{(0)}=10$, $w^{(1)}_0=w^{(1)}_1=0$ and $w^{(1)}_2=100$
use the statement:
\begin{verbatim}
inv.setup(dom, w0=10, w1=[0,0,100])
\end{verbatim}
Note that the weighting factors are rescaled internally in order to improve
numerical stability, so only their relative sizes are relevant. Using
\begin{verbatim}
inv.setup(dom, w0=0.1, w1=[0,0,1])
\end{verbatim}
would therefore lead to the same regularization as the statement above.

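The equivalence of the two settings can be checked with a small sketch.
The sum-to-one normalization used here is only a stand-in for the
(unspecified) internal rescaling; the point is that both settings have
identical weight ratios, so any common rescaling maps them to the same
result.

```python
from math import isclose

def normalise(w0, w1):
    """Rescale weights to sum to one. This particular normalization is a
    stand-in for the internal rescaling; only the ratios matter."""
    total = w0 + sum(w1)
    return [w0 / total] + [w / total for w in w1]

a = normalise(10, [0, 0, 100])    # inv.setup(dom, w0=10,  w1=[0,0,100])
b = normalise(0.1, [0, 0, 1])     # inv.setup(dom, w0=0.1, w1=[0,0,1])
print(all(isclose(x, y) for x, y in zip(a, b)))  # True
```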
