J(m) = J^{reg}(m) + \sum_{f} \mu^{data}_{f} \cdot J^{f}(p^f)
\end{equation}

where $m$ represents the level set function, $J^{reg}$ is the regularization term (see Chapter~\ref{Chp:ref:regularization}),
and $J^{f}$ are a set of cost functions for forward models (see Chapter~\ref{Chp:ref:forward models}) depending on
physical parameters $p^f$. The physical parameters $p^f$ are known functions
of the level set function $m$, which is the unknown to be calculated by the optimization process.
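Since each $p^f$ is a known function of $m$, the gradient of the total cost function with respect to $m$ follows by the chain rule. As a sketch (writing $\frac{\partial p^f}{\partial m}$ formally for the derivative of the parameter map, a notational convenience rather than a definition from this chapter):
\begin{equation*}
\frac{\partial J}{\partial m} = \frac{\partial J^{reg}}{\partial m} + \sum_{f} \mu^{data}_{f} \cdot \frac{\partial J^{f}}{\partial p^f} \cdot \frac{\partial p^f}{\partial m}
\end{equation*}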

The $\mu^{data}_{f}$ are trade-off factors. Note that the regularization term includes additional trade-off factors

\end{array}
\end{equation}

The calculation of the gradient of the forward model component is more complicated:
the data defect $J^{f}$ for forward model $f$ is expressed using a cost function kernel $K^{f}$

\begin{equation}\label{REF:EQU:INTRO 2bb}
J^{f}(p^f) = \int_{\Omega} K^{f} \; dx
\end{equation}
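The specific kernel is supplied by each forward model. As a minimal illustration only (assuming a least-squares misfit against observed data $d^{obs}$ with weighting $\omega$; both symbols are assumptions for this example, not definitions from this chapter), one could take
\begin{equation*}
K^{f} = \frac{\omega}{2} \left( d^{f}(p^f) - d^{obs} \right)^2
\end{equation*}
where $d^{f}(p^f)$ denotes the prediction of forward model $f$.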