% Revision 4345 - Fri Mar 29 07:09:41 2013 UTC by jfenwick - Spelling fixes
\chapter{Cost Function}\label{chapter:ref:inversion cost function}
The general form of the cost function minimized in the inversion is (see also Chapter~\ref{chapter:ref:Drivers})
\begin{equation}\label{REF:EQU:DRIVE:10}
J(m) = J^{reg}(m) + \sum_{f} \mu^{data}_{f} \cdot J^{f}(p^f)
\end{equation}
where $m$ represents the level set function, $J^{reg}$ is the regularization term, see Chapter~\ref{Chp:ref:regularization},
and $J^{f}$ are a set of forward problems, see Chapter~\ref{Chp:ref:forward models}, depending on
physical parameters $p^f$. The physical parameters $p^f$ are known functions
of the level set function $m$, which is the unknown to be calculated by the optimization process.
The $\mu^{data}_{f}$ are trade-off factors. Note that the regularization term includes additional trade-off factors.
The \class{InversionCostFunction} is a class to define cost functions of an inversion. It
implements the \class{CostFunction} template class, see Chapter~\ref{chapter:ref:Minimization}.

In the simplest case there is a single forward model using a single physical parameter which is
derived from a single-valued level set function. The following script snippet shows the creation of the
\class{InversionCostFunction} for the case of a gravity inversion:
\begin{verbatim}
p=DensityMapping(...)
f=GravityModel(...)
J=InversionCostFunction(Regularization(...), \
                        mappings=p, \
                        forward_models=f)
\end{verbatim}
The argument \verb|...| refers to an appropriate argument list.

If two forward models come into play using two different physical parameters,
the \member{mappings} and \member{forward_models} arguments are given as lists in the following form:
\begin{verbatim}
p_rho=DensityMapping(...)
p_k=SusceptibilityMapping(...)
f_mag=MagneticModel(...)
f_grav=GravityModel(...)

J=InversionCostFunction(Regularization(...), \
                        mappings=[p_rho, p_k], \
                        forward_models=[(f_mag, 1), (f_grav, 0)])
\end{verbatim}
Here we define a joint inversion of gravity and magnetic data. \member{forward_models} is given as a list of
tuples, each comprising a forward model and an index which refers to the parameter in the \member{mappings} list to be used as its input.
The magnetic forward model \member{f_mag} uses the second parameter (=\member{p_k}) in the \member{mappings} list.
In this case the physical parameters are defined by a single-valued level set function. It is also possible
to link physical parameters to components of a level set function:
\begin{verbatim}
p_rho=DensityMapping(...)
p_k=SusceptibilityMapping(...)
f_mag=MagneticModel(...)
f_grav=GravityModel(...)

J=InversionCostFunction(Regularization(numLevelSets=2,...), \
                        mappings=[(p_rho,0), (p_k,1)], \
                        forward_models=[(f_mag, 1), (f_grav, 0)])
\end{verbatim}
The \member{mappings} argument is now a list of pairs where the first pair entry specifies the parameter mapping and
the second pair entry specifies the index of the component of the level set function to be used to evaluate the parameter.
In this case the level set function has two components, where the density mapping uses the first component of the level set function
while the susceptibility mapping uses the second component.

\section{\class{InversionCostFunction} API}\label{chapter:ref:inversion cost function:api}

The \class{InversionCostFunction} implements a \class{CostFunction} class used
to run optimization solvers, see Section~\ref{chapter:ref:Minimization: costfunction class}.
Its API is defined as follows:

\begin{classdesc}{InversionCostFunction}{regularization, mappings, forward_models}
Constructor for the inversion cost function. \member{regularization} sets the regularization to be used, see Chapter~\ref{Chp:ref:regularization}.
\member{mappings} is a list of pairs where each pair comprises a
physical parameter mapping (see Chapter~\ref{Chp:ref:mapping}) and an index which refers to the component of the level set function
defined by the \member{regularization} to be used to calculate the corresponding physical parameter.
If the level set function has a single component the index can be omitted.
If, in addition, there is a single physical parameter, the mapping can be given instead of a list.
\member{forward_models} is a list of pairs where the first pair component is a
forward model (see Chapter~\ref{Chp:ref:forward models}) and the second pair
component refers to the physical parameter in the \member{mappings} list
providing the physical parameter for the model.
If a single physical parameter is present the index can be omitted.
If, in addition, a single forward model is used, this forward model can be
assigned to \member{forward_models} in place of a list.
\end{classdesc}

\begin{methoddesc}[InversionCostFunction]{getDomain}{}
returns the \escript domain of the inversion, see~\cite{ESCRIPT}.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getNumTradeOffFactors}{}
returns the total number of trade-off factors.
The count includes the trade-off factors $\mu^{data}_{f}$ for the forward
models and the (hidden) trade-off factors in the regularization term,
see equation~\ref{REF:EQU:DRIVE:10}.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getForwardModel}{\optional{idx=\None}}
returns the forward model with index \member{idx}.
If the cost function contains only one model the argument \member{idx} can be omitted.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getRegularization}{}
returns the regularization component of the cost function, see \class{Regularization} in Chapter~\ref{Chp:ref:regularization}.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{setTradeOffFactorsModels}{\optional{mu=\None}}
sets the trade-off factors $\mu^{data}_{f}$ for the forward model components.
If a single model is present \member{mu} must be a floating point number.
Otherwise \member{mu} must be a list of floating point numbers.
It is assumed that all numbers are positive.
The default value for all trade-off factors is one.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getTradeOffFactorsModels}{}
returns the values of the trade-off factors $\mu^{data}_{f}$ for the forward model components.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{setTradeOffFactorsRegularization}{\optional{mu=\None}, \optional{mu_c=\None}}
sets the trade-off factors for the regularization component of the cost function.
\member{mu} defines the trade-off factors for the level-set variation part and
\member{mu_c} sets the trade-off factors for the cross-gradient variation part.
This method is a shortcut for calling \method{setTradeOffFactorsForVariation}
and \method{setTradeOffFactorsForCrossGradient} on the underlying
regularization.
Please see \class{Regularization} in Chapter~\ref{Chp:ref:regularization} for
more details on the arguments \member{mu} and \member{mu_c}.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{setTradeOffFactors}{\optional{mu=\None}}
sets the trade-off factors for the forward model and regularization terms.
\member{mu} is a list of positive floats. The length of the list is the total
number of trade-off factors given by the method \method{getNumTradeOffFactors}.
The first part of \member{mu} defines the trade-off factors $\mu^{data}_{f}$
for the forward model components while the remaining entries define the
trade-off factors for the regularization components of the cost function.
By default all values are set to one.
\end{methoddesc}
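The ordering convention of the combined list \member{mu} can be illustrated with a plain-Python sketch. The helper below is hypothetical and not part of the escript API; it only demonstrates how the first entries belong to the forward models and the remainder to the regularization:

```python
# Plain-Python sketch of the ordering convention used by setTradeOffFactors:
# the combined list mu holds the factors mu^data_f for the forward models
# first, followed by the regularization factors. The helper is hypothetical
# and NOT part of the escript API; it only illustrates the split.

def split_trade_off_factors(mu, num_models):
    """Split a combined trade-off factor list into model and regularization parts."""
    if len(mu) < num_models:
        raise ValueError("mu has fewer entries than there are forward models")
    if any(value <= 0 for value in mu):
        raise ValueError("all trade-off factors must be positive")
    mu_models = mu[:num_models]           # mu^data_f for the forward models
    mu_regularization = mu[num_models:]   # remaining factors for J^reg
    return mu_models, mu_regularization

# two forward models (e.g. magnetic and gravity) plus two regularization factors:
mu_models, mu_reg = split_trade_off_factors([10.0, 1.0, 1.0, 0.5], num_models=2)
```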

\begin{methoddesc}[InversionCostFunction]{getProperties}{m}
returns the physical properties from a given level set function \member{m}
using the mappings of the cost function. The physical properties are
returned in the order in which they are given in the \member{mappings} argument
in the class constructor.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{createLevelSetFunction}{*props}
returns the level set function corresponding to a set of given physical properties.
This method is the inverse of the \method{getProperties} method.
The arguments \member{props} define a tuple of values for the physical
properties where the order needs to correspond to the order in which the
physical property mappings are given in the \member{mappings} argument in the
class constructor. If a value for a physical property is given as \None the
corresponding component of the returned level set function is set to zero.
If no physical properties are given all components of the level set function
are set to zero.
\end{methoddesc}
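The ordering contract of \method{getProperties} can be mimicked with ordinary Python callables. This is a toy sketch only: the real mappings operate on \escript \class{Data} objects, and the linear stand-in factors below are made up for illustration:

```python
# Toy illustration of getProperties: each physical parameter is obtained by
# applying its mapping to the assigned component of the level set function,
# and the results are returned in the order of the mappings list. The linear
# stand-ins are hypothetical, not the real Density-/SusceptibilityMapping.

def density(m):
    return 500.0 * m          # stand-in for DensityMapping

def susceptibility(m):
    return 0.1 * m            # stand-in for SusceptibilityMapping

mappings = [(density, 0), (susceptibility, 1)]

def get_properties(m):
    """Evaluate each mapping on its level set component, in mapping order."""
    return [mapping(m[component]) for mapping, component in mappings]

m = [2.0, 3.0]                # two-component level set function
props = get_properties(m)     # [density(m[0]), susceptibility(m[1])]
```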

\begin{methoddesc}[InversionCostFunction]{getNorm}{m}
returns the norm of a level set function \member{m} as a floating point number.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getArguments}{m}
returns pre-computed values for the evaluation of the cost function and its
gradient for a given value \member{m} of the level set function.
In essence the method collects pre-computed values for the underlying
regularization and forward models\footnote{Using pre-computed values can
significantly speed up the optimization process when the value of the cost
function and its gradient are needed for the same level set function.}.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getValue}{m\optional{, *args}}
returns the value of the cost function for a given level set function \member{m}
and corresponding pre-computed values \member{args}.
If the pre-computed values are not supplied \method{getArguments} is called.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getGradient}{m\optional{, *args}}
returns the gradient of the cost function at level set function \member{m}
using the corresponding pre-computed values \member{args}.
If the pre-computed values are not supplied \method{getArguments} is called.
The gradient is represented as a tuple $(Y,X)$ where in essence $Y$ represents
the derivative of the cost function kernel with respect to the level set
function and $X$ represents the derivative of the cost function kernel with
respect to the gradient of the level set function, see
Section~\ref{chapter:ref:inversion cost function:gradient} for more details.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getDualProduct}{m, g}
returns the dual product of a level set function \member{m} with a gradient
\member{g}, see Section~\ref{chapter:ref:inversion cost function:gradient} for more details.
This method uses the dual product of the regularization.
\end{methoddesc}

\begin{methoddesc}[InversionCostFunction]{getInverseHessianApproximation}{m, g \optional{, *args}}
returns an approximate evaluation of the inverse of the Hessian operator of
the cost function for a given gradient \member{g} at a given level set function
\member{m} using the corresponding pre-computed values \member{args}.
If no pre-computed values are present \method{getArguments} is called.
In the current implementation contributions to the Hessian operator from the
forward models are ignored and only contributions from the regularization and
cross-gradient term are used.
\end{methoddesc}
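The interplay of \method{getArguments}, \method{getValue} and \method{getGradient} can be sketched with a toy scalar cost function $J(m)=\frac{1}{2}m^2$. This is plain Python, not escript: the real class works on level set functions and its gradients are $(Y,X)$ tuples, while the toy class below only mirrors the call pattern:

```python
# Toy cost function following the CostFunction call pattern: shared quantities
# are computed once in getArguments and reused by getValue and getGradient.
# A scalar stands in for the level set function; illustration only.

class ToyCostFunction:
    def getArguments(self, m):
        # pre-computed values shared by getValue and getGradient
        return (m * m,)

    def getValue(self, m, *args):
        if not args:
            args = self.getArguments(m)
        return 0.5 * args[0]          # J(m) = m^2 / 2

    def getGradient(self, m, *args):
        if not args:
            args = self.getArguments(m)
        return m                      # dJ/dm = m

    def getDualProduct(self, n, g):
        return n * g                  # <n, g> for scalars

J = ToyCostFunction()
args = J.getArguments(3.0)            # computed once ...
value = J.getValue(3.0, *args)        # ... reused here
grad = J.getGradient(3.0, *args)      # ... and here
```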


\section{Gradient calculation}\label{chapter:ref:inversion cost function:gradient}
In this section we briefly discuss the calculation of the gradient and the Hessian operator.
If $\nabla$ denotes the gradient operator (with respect to the level set function $m$)
the gradient of $J$ is given as
\begin{equation}\label{REF:EQU:DRIVE:10b}
\nabla J(m) = \nabla J^{reg}(m) + \sum_{f} \mu^{data}_{f} \cdot \nabla J^{f}(p^f) \; .
\end{equation}
We first focus on the calculation of $\nabla J^{reg}$. The
regularization cost function $J^{reg}$ is given through a cost function
kernel\index{cost function!kernel} $K^{reg}$ in the form
\begin{equation}\label{REF:EQU:INTRO 2a}
J^{reg}(m) = \int_{\Omega} K^{reg} \; dx
\end{equation}
where $K^{reg}$ is a given function of the
level set function $m_k$ and its spatial derivatives $m_{k,i}$. If $n$ is an increment to the level set function
then the directional derivative of $J^{reg}$ in the direction of $n$ is given as
\begin{equation}\label{REF:EQU:INTRO 2aa}
<n, \nabla J^{reg}(m)> = \int_{\Omega} \frac{ \partial K^{reg}}{\partial m_k} n_k + \frac{ \partial K^{reg}}{\partial m_{k,i}} n_{k,i} \; dx
\end{equation}
where $<.,.>$ denotes the dual product, see Chapter~\ref{chapter:ref:Minimization}. Consequently, the gradient $\nabla J^{reg}$
can be represented by a pair of values $Y$ and $X$
\begin{equation}\label{ref:EQU:CS:101}
\begin{array}{rcl}
Y_k & = & \displaystyle{\frac{\partial K^{reg}}{\partial m_k}} \\
X_{ki} & = & \displaystyle{\frac{\partial K^{reg}}{\partial m_{k,i}}}
\end{array}
\end{equation}
while the dual product $<.,.>$ of a level set increment $n$ and a gradient increment $g=(Y,X)$ is given as
\begin{equation}\label{REF:EQU:INTRO 2aaa}
<n,g> = \int_{\Omega} Y_k n_k + X_{ki} n_{k,i} \; dx
\end{equation}
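As a simple illustration, consider a single-component smoothing kernel of the form
$K^{reg} = \frac{1}{2} \mu \, m_{,i} m_{,i}$ with a constant trade-off factor $\mu$
(a simplified stand-in for the kernels actually used by the \class{Regularization} class). In this case
\[
Y = \frac{\partial K^{reg}}{\partial m} = 0
\qquad \mbox{and} \qquad
X_{i} = \frac{\partial K^{reg}}{\partial m_{,i}} = \mu \, m_{,i}
\]
so the dual product with an increment $n$ becomes $<n, \nabla J^{reg}(m)> = \int_{\Omega} \mu \, m_{,i} n_{,i} \; dx$.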
We also need to provide (an approximation of) the value $p$ of the inverse of the Hessian operator $\nabla \nabla J$
for a given gradient increment $g=(Y,X)$. This means we need to (approximately) solve the variational problem
\begin{equation}\label{REF:EQU:INTRO 2b}
<n,\nabla \nabla J p > = \int_{\Omega} Y_k n_k + X_{ki} n_{k,i} \; dx
\end{equation}
for all increments $n$ of the level set function. If we ignore contributions
from the forward models the left hand side takes the form
\begin{equation}\label{REF:EQU:INTRO 2c}
<n,\nabla \nabla J^{reg} p > = \int_{\Omega}
\displaystyle{\frac{\partial Y_k}{\partial m_l}} p_l n_k +
\displaystyle{\frac{\partial Y_k}{\partial m_{l,j}}} p_{l,j} n_k +
\displaystyle{\frac{\partial X_{ki}}{\partial m_l}} p_l n_{k,i} +
\displaystyle{\frac{\partial X_{ki}}{\partial m_{l,j}}} p_{l,j} n_{k,i}
\; dx
\end{equation}
Notice that equation~\ref{REF:EQU:INTRO 2b} defines a system of linear PDEs
which is solved using the \escript \class{LinearPDE} class. In the \escript notation we need to provide
\begin{equation}\label{ref:EQU:REG:600}
\begin{array}{rcl}
A_{kilj} & = & \displaystyle{\frac{\partial X_{ki}}{\partial m_{l,j}}} \\
B_{kil} & = & \displaystyle{\frac{\partial X_{ki}}{\partial m_l}} \\
C_{klj} & = & \displaystyle{\frac{\partial Y_k}{\partial m_{l,j}}} \\
D_{kl} & = & \displaystyle{\frac{\partial Y_k}{\partial m_l}}
\end{array}
\end{equation}
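For instance, for a single-component smoothing kernel $K^{reg} = \frac{1}{2} \mu \, m_{,i} m_{,i}$ with constant $\mu$ (a simplified stand-in for the actual regularization kernels) one has $Y = 0$ and $X_{i} = \mu \, m_{,i}$, so the only non-zero coefficient is
\[
A_{ij} = \frac{\partial X_{i}}{\partial m_{,j}} = \mu \, \delta_{ij}
\]
and the Hessian problem reduces to a PDE of Laplace type for $p$.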
The calculation of the gradient of the forward model component is more complicated:
the data defect $J^{f}$ for forward model $f$ is expressed using a cost function kernel $K^{f}$
\begin{equation}\label{REF:EQU:INTRO 2bb}
J^{f}(p^f) = \int_{\Omega} K^{f} \; dx
\end{equation}
In this case the cost function kernel $K^{f}$ is a function of the
physical parameter $p^f$, which again is a function of the level set function,
and of the state variable $u^f_{k}$ and its gradient $u^f_{k,i}$. For the sake of a simpler
presentation the upper index $f$ is dropped.

The gradient $\nabla_{p} J$ of $J$ with respect to
the physical parameter $p$ is given as
\begin{equation}\label{REF:EQU:costfunction 100b}
<q, \nabla_{p} J(p)> = \int_{\Omega}
\displaystyle{\frac{\partial K }{\partial u_k } } \displaystyle{\frac{\partial u_k }{\partial q } } +
\displaystyle{\frac{\partial K }{\partial u_{k,i} } } \left( \displaystyle{\frac{\partial u_k }{\partial q } } \right)_{,i}+
\displaystyle{\frac{\partial K }{\partial p } } q \; dx
\end{equation}
for any $q$ as an increment to the physical parameter $p$. If the change
of the state variable
$u$ for physical parameter $p$ in the direction of $q$ is denoted as
\begin{equation}\label{REF:EQU:costfunction 100c}
d_k =\displaystyle{\frac{\partial u_k }{\partial q } }
\end{equation}
equation~\ref{REF:EQU:costfunction 100b} can be written as
\begin{equation}\label{REF:EQU:costfunction 100d}
<q, \nabla_{p} J(p)> = \int_{\Omega}
\displaystyle{\frac{\partial K }{\partial u_k } } d_k +
\displaystyle{\frac{\partial K }{\partial u_{k,i} } } d_{k,i}+
\displaystyle{\frac{\partial K }{\partial p } } q \; dx
\end{equation}
The state variables are the solution of a PDE which in variational form is given as
\begin{equation}\label{REF:EQU:costfunction 100}
\int_{\Omega} F_k \cdot r_k + G_{ki} \cdot r_{k,i} \; dx = 0
\end{equation}
for all increments $r$ to the state $u$. The functions $F$ and $G$ are given and describe the physical
model. They depend on the state variable $u_{k}$ and its gradient $u_{k,i}$ and the physical parameter $p$. The change
$d_k$ of the state
$u$ for physical parameter $p$ in the direction of $q$ is given by the equation
\begin{equation}\label{REF:EQU:costfunction 100bb}
\int_{\Omega}
\displaystyle{\frac{\partial F_k }{\partial u_l } } d_l r_k +
\displaystyle{\frac{\partial F_k }{\partial u_{l,j}} } d_{l,j} r_k +
\displaystyle{\frac{\partial F_k }{\partial p} }q r_k +
\displaystyle{\frac{\partial G_{ki}}{\partial u_l} } d_l r_{k,i} +
\displaystyle{\frac{\partial G_{ki}}{\partial u_{l,j}} } d_{l,j} r_{k,i}+
\displaystyle{\frac{\partial G_{ki}}{\partial p} } q r_{k,i}
\; dx = 0
\end{equation}
to be fulfilled for all functions $r$. Now let $d^*_k$ be the solution of the
variational equation
\begin{equation}\label{REF:EQU:costfunction 100dd}
\int_{\Omega}
\displaystyle{\frac{\partial F_k }{\partial u_l } } h_l d^*_k +
\displaystyle{\frac{\partial F_k }{\partial u_{l,j}} } h_{l,j} d^*_k +
\displaystyle{\frac{\partial G_{ki}}{\partial u_l} } h_l d^*_{k,i} +
\displaystyle{\frac{\partial G_{ki}}{\partial u_{l,j}} } h_{l,j} d^*_{k,i}
\; dx
= \int_{\Omega}
\displaystyle{\frac{\partial K }{\partial u_k } } h_k +
\displaystyle{\frac{\partial K }{\partial u_{k,i} } } h_{k,i} \; dx
\end{equation}
for all increments $h$ to the state variable $u$. This adjoint problem
is solved using the \escript \class{LinearPDE} class. In the \escript notation we need to provide
\begin{equation}\label{ref:EQU:REG:600b}
\begin{array}{rcl}
A_{kilj} & = & \displaystyle{\frac{\partial G_{lj}}{\partial u_{k,i}} } \\
B_{kil} & = & \displaystyle{\frac{\partial F_l }{\partial u_{k,i}} } \\
C_{klj} & = & \displaystyle{\frac{\partial G_{lj}}{\partial u_k} } \\
D_{kl} & = & \displaystyle{\frac{\partial F_l }{\partial u_k } } \\
Y_{k} & = & \displaystyle{\frac{\partial K }{\partial u_k } } \\
X_{ki} & = & \displaystyle{\frac{\partial K }{\partial u_{k,i} } }
\end{array}
\end{equation}
Notice that these coefficients are the transposes of the coefficients used to solve for the
state variables in equation~\ref{REF:EQU:costfunction 100}.

Setting $h_l=d_l$ in equation~\ref{REF:EQU:costfunction 100dd} and
$r_k=d^*_k$ in equation~\ref{REF:EQU:costfunction 100bb} one gets
\begin{equation}\label{ref:EQU:costfunction:601}
\int_{\Omega}
\displaystyle{\frac{\partial K }{\partial u_k } } d_k +
\displaystyle{\frac{\partial K }{\partial u_{k,i} } } d_{k,i}+
\displaystyle{\frac{\partial F_k }{\partial p} } q d^*_k +
\displaystyle{\frac{\partial G_{ki}}{\partial p} } q d^*_{k,i}
\; dx = 0
\end{equation}
which is inserted into equation~\ref{REF:EQU:costfunction 100d} to get
\begin{equation}\label{REF:EQU:costfunction 602}
<q, \nabla_{p} J(p)> = \int_{\Omega} \left(
\displaystyle{\frac{\partial K }{\partial p } } - \displaystyle{\frac{\partial F_k }{\partial p} } d^*_k
- \displaystyle{\frac{\partial G_{ki}}{\partial p} } d^*_{k,i} \right) q \; dx
\end{equation}
In fact we need the gradient of $J^f$ with respect to the level set function, which is given as
\begin{equation}\label{REF:EQU:costfunction 603}
<n, \nabla J^f> = \int_{\Omega} \left(
\displaystyle{\frac{\partial K^f}{\partial p^f} } - \displaystyle{\frac{\partial F^f_k }{\partial p^f} } d^{f*}_k
- \displaystyle{\frac{\partial G^f_{ki}}{\partial p^f} } d^{f*}_{k,i} \right)
\cdot \displaystyle{\frac{\partial p^f }{\partial m_l} } n_l \; dx
\end{equation}
for any increment $n$ to the level set function. So in summary we get
\begin{equation}\label{ref:EQU:CS:101b}
\begin{array}{rcl}
Y_k & = & \displaystyle{\frac{\partial K^{reg}}{\partial m_k}} +
\sum_{f} \mu^{data}_{f} \left(
\displaystyle{\frac{\partial K^f}{\partial p^f} } - \displaystyle{\frac{\partial F^f_l }{\partial p^f} } d^{f*}_l
- \displaystyle{\frac{\partial G^f_{li}}{\partial p^f} } d^{f*}_{l,i} \right)
\cdot \displaystyle{\frac{\partial p^f }{\partial m_k} }
\\
X_{ki} & = & \displaystyle{\frac{\partial K^{reg}}{\partial m_{k,i}}}
\end{array}
\end{equation}
to represent $\nabla J$ as the tuple $(Y,X)$. Contributions of the forward model to the
Hessian operator are ignored.

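To illustrate these quantities for a concrete model, consider a simplified gravity-type forward problem (the scaling and sign conventions of the actual \class{GravityModel} class may differ in detail): the state $u$ is a scalar potential satisfying the Poisson equation $u_{,ii} = -4\pi \gamma \, p$ with density $p$ and gravitational constant $\gamma$, which in the variational form of equation~\ref{REF:EQU:costfunction 100} corresponds to $G_{i} = u_{,i}$ and $F = -4\pi \gamma \, p$. If the data defect kernel $K^f$ depends on the state $u$ only, then $\frac{\partial K^f}{\partial p} = 0$, $\frac{\partial G_{i}}{\partial p} = 0$ and $\frac{\partial F}{\partial p} = -4\pi \gamma$, so equation~\ref{REF:EQU:costfunction 602} reduces to
\[
<q, \nabla_{p} J(p)> = \int_{\Omega} 4\pi \gamma \, d^* q \; dx \; ,
\]
that is, the gradient with respect to the density is obtained directly from the adjoint solution $d^*$.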
