\section{The Stokes Problem}
\label{STOKES PROBLEM}
In this section we discuss how to solve the Stokes problem, which is defined as follows:

We want to calculate the velocity \index{velocity} field $v$ and pressure $p$ of an incompressible fluid \index{incompressible fluid}. They are given as the solution of the Stokes problem\index{Stokes problem}
\begin{equation}\label{Stokes 1}
-\left(\eta(v\hackscore{i,j}+ v\hackscore{j,i})\right)\hackscore{,j}+p\hackscore{,i}=f\hackscore{i}-\sigma\hackscore{ij,j}
\end{equation}
where $f\hackscore{i}$ defines an internal force \index{force, internal} and $\sigma\hackscore{ij}$ is an initial stress \index{stress, initial}. The viscosity $\eta$ may weakly depend on pressure and velocity. If relevant we will use the notation $\eta(v,p)$ to express this dependency.

We assume an incompressible medium:
\begin{equation}\label{Stokes 2}
-v\hackscore{i,i}=0
\end{equation}
Natural boundary conditions are taken in the form
\begin{equation}\label{Stokes Boundary}
\left(\eta(v\hackscore{i,j}+ v\hackscore{j,i})\right)n\hackscore{j}-n\hackscore{i}p=s\hackscore{i} - \alpha \cdot n\hackscore{i} n\hackscore{j} v\hackscore{j}+\sigma\hackscore{ij} n\hackscore{j}
\end{equation}
which can be overridden by constraints of the form
\begin{equation}\label{Stokes Boundary0}
v\hackscore{i}(x)=v^D\hackscore{i}(x)
\end{equation}
at some locations $x$ on the boundary of the domain. Here $s\hackscore{i}$ defines a normal stress and
$\alpha\ge 0$ the spring constant for the restoring normal force.
The set of components $i$ to which the constraint applies may depend on the location $x$ on the boundary.
$v^D$ is a given function on the domain.

\subsection{Solution Method \label{STOKES SOLVE}}
In block form, equations~\ref{Stokes 1} and~\ref{Stokes 2} take the form of a saddle point problem
\index{saddle point problem}
\begin{equation}
\left[ \begin{array}{cc}
A & B^{*} \\
B & 0 \\
\end{array} \right]
\left[ \begin{array}{c}
v \\
p \\
\end{array} \right]
=\left[ \begin{array}{c}
G \\
0 \\
\end{array} \right]
\label{SADDLEPOINT}
\end{equation}
where $A$ is a coercive, self-adjoint linear operator in a suitable Hilbert space (assuming $\eta$ does not depend on $v$ or $p$), $B$ is the $(-1) \cdot$ divergence operator and $B^{*}$ is its adjoint operator (the gradient operator). For more details on the mathematics see references \cite{AAMIRBERKYAN2008,MBENZI2005}.
We use iterative techniques to solve this problem: Given an approximation $v$ and $p$ for
velocity and pressure, we perform the following steps in the style of the Uzawa scheme \index{Uzawa scheme}:
\begin{enumerate}
\item calculate the viscosity $\eta(v,p)$ and assemble the operator $A$ from $\eta$.
\item Solve for $dv$:
\begin{equation}
A dv = G - A v - B^{*} p \label{SADDLEPOINT ITER STEP 1}
\end{equation}
\item update $v\hackscore{1}= v+ dv$
\item if $\max(\|Bv\hackscore{1}\|\hackscore{0},\|dv\|\hackscore{1}) \le \tau \cdot \|v\hackscore{1}\|\hackscore{1}$: return $v\hackscore{1}, p$
\item Solve for $dp$:
\begin{equation}
\begin{array}{rcl}
B A^{-1} B^{*} dp & = & Bv\hackscore{1} \\
A dv\hackscore{2} & = & B^{*} dp
\end{array}
\label{SADDLEPOINT ITER STEP 2}
\end{equation}
\item update $p\hackscore{2}\leftarrow p+ dp$ and $v\hackscore{2}= v\hackscore{1} - dv\hackscore{2}$
\item go back to Step 1 with $v=v\hackscore{2}$ and $p=p\hackscore{2}$.
\end{enumerate}
where $\tau$ is the given tolerance. Notice that the operator $A$, if it depends on $v$ or $p$, is updated only once per sweep of the scheme (in Step 1). In this algorithm we use the norms
\begin{equation}
\|v\|\hackscore{1}^2 = \int\hackscore{\Omega} v\hackscore{j,k}v\hackscore{j,k} \; dx
\mbox{ and }
\|p\|\hackscore{0}^2= \int\hackscore{\Omega} p^2 \; dx.
\label{STOKES STOP}
\end{equation}
so the stopping criterion checks for convergence as well as for fulfilment of
the incompressibility condition~\ref{Stokes 2}.

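To illustrate the structure of the scheme, the following sketch performs one sweep on a small, dense saddle point problem. It uses NumPy arrays as hypothetical stand-ins for the assembled escript operators and Euclidean norms in place of $\|\cdot\|\hackscore{1}$ and $\|\cdot\|\hackscore{0}$; it is not the \class{StokesProblemCartesian} implementation:
\begin{python}
import numpy as np

def uzawa_sweep(A, B, G, v, p, tau=1.e-8):
    # step 2: solve A dv = G - A v - B^* p
    dv = np.linalg.solve(A, G - A.dot(v) - B.T.dot(p))
    # step 3: update the velocity
    v1 = v + dv
    # step 4: check convergence and incompressibility
    if max(np.linalg.norm(B.dot(v1)), np.linalg.norm(dv)) <= tau*np.linalg.norm(v1):
        return v1, p, True
    # step 5: solve B A^{-1} B^* dp = B v1 and A dv2 = B^* dp
    # (explicit Schur complement, feasible for small dense problems only)
    S = B.dot(np.linalg.solve(A, B.T))
    dp = np.linalg.solve(S, B.dot(v1))
    dv2 = np.linalg.solve(A, B.T.dot(dp))
    # step 6: updated velocity and pressure for the next sweep
    return v1 - dv2, p + dp, False
\end{python}
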
To solve the update step~\ref{SADDLEPOINT ITER STEP 2} we use an iterative solver with iteration
operator $B A^{-1} B^{*}$, e.g. the generalized minimal residual method (GMRES) \index{linear solver!GMRES}\index{GMRES} or the preconditioned conjugate gradient method (PCG) \index{linear solver!PCG}\index{PCG}. A suitable preconditioner \index{preconditioner} for this operator is $\frac{1}{\eta}$, i.e.
the evaluation of the preconditioner $P$ for a given pressure increment $q$ is the solution of
\begin{equation} \label{P PREC}
\frac{1}{\eta} Pq = q \; .
\end{equation}
Note that in each evaluation of the iteration operator $q=B A^{-1} B^{*} s$ one needs to solve
the problem
\begin{equation} \label{P OPERATOR}
A w = B^{*} s
\end{equation}
with sufficient accuracy to return $q=Bw$. Notice that the residual $r$ is given as
\begin{equation} \label{STOKES RES}
r= B v\hackscore{1} - B A^{-1} B^{*} dp = B (v\hackscore{1} - A^{-1} B^{*} dp) = B (v\hackscore{1}-dv\hackscore{2}) = B v\hackscore{2}
\end{equation}
so in fact the residual $r$ is represented by the updated velocity $v\hackscore{2}$. This saves the recovery of
$dv\hackscore{2}$ in~\ref{SADDLEPOINT ITER STEP 2} after $dp$ has been calculated, as iterative methods such as PCG calculate the solution approximations along with their residuals. In PCG the iteration is terminated if
\begin{equation} \label{PCG STOP}
\| P^{\frac{1}{2}}B v\hackscore{2} \|\hackscore{0} \le \tau\hackscore{2} \| P^{\frac{1}{2}}B v\hackscore{1} \|\hackscore{0}
\end{equation}
where $\tau\hackscore{2}$ is the given tolerance. The update step~\ref{P OPERATOR} involves the
solution of a sparse matrix problem. We use the tolerance $\tau\hackscore{2}^2$ for this problem in order to make sure that any
error from solving it does not interfere with the PCG iteration.

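As a minimal illustration of these two building blocks, the evaluation of the iteration operator and of the preconditioner~\ref{P PREC} may be sketched as follows; the dense matrices \var{A}, \var{B} and the constant viscosity \var{eta} are hypothetical placeholders, and a direct solver stands in for the inexact solution of the problem $Aw=B^{*}s$ with tolerance $\tau\hackscore{2}^2$:
\begin{python}
import numpy as np

def apply_schur_operator(A, B, s):
    # q = B A^{-1} B^* s: solve A w = B^* s, then return q = B w
    w = np.linalg.solve(A, B.T.dot(s))
    return B.dot(w)

def apply_preconditioner(eta, q):
    # solve (1/eta) P q = q; for a constant viscosity this gives P q = eta*q
    return eta*q
\end{python}
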
We need to think about appropriate stopping criteria when solving
\ref{SADDLEPOINT ITER STEP 1}, \ref{SADDLEPOINT ITER STEP 2} and~\ref{P OPERATOR}. We would like to use very weak convergence criteria to reduce computational costs, but they need to be tight enough not to interfere with the
convergence of the iteration process one level above. From Equation~\ref{SADDLEPOINT ITER STEP 1} we have
\begin{equation}
v\hackscore{1} = e\hackscore{1} + A^{-1} ( G - B^{*} p )
\end{equation}
We will use a sparse matrix solver, so we do not have full control over the norm $\|.\|\hackscore{s}$ used in the stopping criterion
\begin{equation}
\| G - A v\hackscore{1} - B^{*} p \|\hackscore{s} \le \tau\hackscore{1} \| G - A v - B^{*} p \|\hackscore{s}
\end{equation}
This translates into the condition
\begin{equation}
\| e\hackscore{1} \|\hackscore{1} \le K \tau\hackscore{1} \| dv\hackscore{1} - e\hackscore{1} \|\hackscore{1}
\mbox{ and therefore }
\| e\hackscore{1} \|\hackscore{1} \le \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}} \| dv\hackscore{1} \|\hackscore{1}
\end{equation}
The constant $K$ represents some uncertainty combining a variety of unknown factors such as the
solver being used and the condition number of the stiffness matrix.

From the first equation of~\ref{SADDLEPOINT ITER STEP 2} we have
\begin{equation}
p\hackscore{2} = p + (B A^{-1} B^{*})^{-1} (e\hackscore{2} + Bv\hackscore{1} ) =
(B A^{-1} B^{*})^{-1} ( e\hackscore{2} + B e\hackscore{1} + B A^{-1} G)
\end{equation}
and similarly
\begin{equation}
v\hackscore{2} = v\hackscore{1} - dv\hackscore{2}
= v\hackscore{1} - A^{-1} B^{*}dp
= e\hackscore{1} + A^{-1} ( G - B^{*} p ) - A^{-1} B^{*} (p\hackscore{2}-p)
= e\hackscore{1} + A^{-1} ( G - B^{*} p\hackscore{2})
\end{equation}
This shows that we can write the iterative process as a fixed point iteration to solve (assuming all errors are zero)
\begin{equation}
v = \Phi(v,p) \mbox{ and } p = \Psi(v,p)
\end{equation}
where
\begin{equation}
\begin{array}{rcl}
\Psi(v,p) & = & (B A^{-1} B^{*})^{-1} B A^{-1} G \\
\Phi(v,p) & = & A^{-1} ( G - B^{*} (B A^{-1} B^{*})^{-1} B A^{-1} G )
\end{array}
\end{equation}
Notice that if $A$ is independent of $v$ and $p$ the operators $\Phi(v,p)$ and $\Psi(v,p)$ are constant
and therefore, provided there are no termination errors in the sub-iterations, the iteration will terminate after one step.
We can also give a formula for the error
\begin{equation}
\begin{array}{rcl}
\delta p & = & (B A^{-1} B^{*})^{-1} ( e\hackscore{2} + B e\hackscore{1} ) \\
\delta v & = & e\hackscore{1} - A^{-1} B^{*}\delta p
\end{array}\label{STOKES ERRORS}
\end{equation}
Notice that $B\delta v = - e\hackscore{2}=-Bv\hackscore{2}$.
With this notation
\begin{equation}
\begin{array}{rcl}
v\hackscore{2} & = & \Phi(v,p) + \delta v \\
p\hackscore{2} & = & \Psi(v,p) + \delta p \\
\end{array}
\end{equation}
We use $dv=v\hackscore{2}-v$ and $Bv\hackscore{1}=B A^{-1} B^{*} dp$ to measure the error of the
current approximations $v\hackscore{2}$ and $p\hackscore{2}$ with respect to the exact solution.
Assuming that the iteration does converge with a convergence rate $\chi^{-}$ we have the estimate
\begin{equation}
\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})
\le \chi^{-} \max(\|dv^{-}\|\hackscore{1}, \|Bv\hackscore{1}^{-}\|\hackscore{0})
+ \max(\|\delta v\|\hackscore{1}, \|(B A^{-1} B^{*}) \delta p\|\hackscore{0})
\end{equation}
where the upper index $-$ refers to the increments in the previous step.
Now we try to establish estimates for $\|\delta v\|\hackscore{1}$ and $\|(B A^{-1} B^{*}) \delta p\|\hackscore{0}$ from
the formulas~\ref{STOKES ERRORS}, where we neglect $\delta p$ in the equation for $\delta v$ assuming that
$\delta p$ is controlled.
\begin{equation}
\|\delta v\|\hackscore{1} \approx \|e\hackscore{1}\|\hackscore{1} \le \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}} \| dv\hackscore{1} \|\hackscore{1} \approx \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}} \| dv\|\hackscore{1}
\end{equation}
and
\begin{equation}
\|(B A^{-1} B^{*}) \delta p\|\hackscore{0} \approx \| e\hackscore{2} \|\hackscore{0}
= \| B v\hackscore{2} \|\hackscore{0} \le M \tau\hackscore{2} \| B v\hackscore{1} \|\hackscore{0}
\end{equation}
which leads to
\begin{equation}
\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})
\le \frac{\chi^{-}}{1-\max(M \tau\hackscore{2}, \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}})} \max(\|dv^{-}\|\hackscore{1}, \|Bv\hackscore{1}^{-}\|\hackscore{0}) \label{STOKES ESTIMTED IMPROVEMENT}
\end{equation}
where the upper index $-$ refers to the previous iteration step.
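To see this, notice that the two estimates above give
\begin{equation}
\max(\|\delta v\|\hackscore{1}, \|(B A^{-1} B^{*}) \delta p\|\hackscore{0})
\le \max(M \tau\hackscore{2}, \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}})
\cdot \max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})
\end{equation}
so inserting this into the error estimate and moving the term with the current increments to the left hand side yields the bound above.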
If we require $\max(M \tau\hackscore{2}, \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}})\le \gamma$
with a given $0<\gamma<1$ we can set
\begin{equation} \label{STOKES SET TOL}
\tau\hackscore{2} = \frac{\gamma}{M} \mbox{ and } \tau\hackscore{1} = \frac{1}{K} \frac{\gamma}{1+\gamma}
\end{equation}
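Indeed, with these choices both terms attain the bound $\gamma$, as $M \tau\hackscore{2} = \gamma$ and
\begin{equation}
\frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}} = \frac{\frac{\gamma}{1+\gamma}}{1-\frac{\gamma}{1+\gamma}} = \gamma \; .
\end{equation}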
The problem is that we do not know $M$ and $K$, but we can use the convergence rate
\begin{equation}
\chi := \frac{\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})}{\max(\|dv^{-}\|\hackscore{1}, \|Bv\hackscore{1}^{-}\|\hackscore{0})}
\end{equation}
to construct estimates of $M$ and $K$. We look at the two cases where our prediction and choice of
the tolerances was good or where we went wrong:
\begin{equation}
\chi \le \frac{\chi^{-}}{1-\gamma} \mbox{ or }
\chi = \frac{\chi^{-}}{1-\gamma\hackscore{0}} >\frac{\chi^{-}}{1-\gamma}
\end{equation}
which translates to
\begin{equation}
\gamma\hackscore{0} \le \gamma \mbox{ or } \gamma\hackscore{0}=1-\frac{\chi^{-}}{ \chi}>\gamma>0
\end{equation}
In the second case we use \ref{STOKES SET TOL} for $\gamma=\gamma\hackscore{0}$ to get new estimates $M^{+}$ and $K^{+}$
for $M$ and $K$:
\begin{equation} \label{TOKES CONST UPDATE}
M^{+} =
\frac{\gamma\hackscore{0}}{\gamma} M
\mbox{ and } K^{+} =
\frac{\gamma\hackscore{0}}{\gamma}
\frac{1+\gamma}{1+\gamma\hackscore{0}}
K
\mbox{ with } \gamma\hackscore{0}=\max(1-\frac{\chi^{-}}{ \chi},\gamma)
\end{equation}
With these updated constants we can now get better values for the tolerances via an updated value $\gamma^{+}$ for $\gamma$ and the corrected values $M^{+}$ and $K^{+}$ for $M$ and $K$. If we are in the case of convergence, which we
measure by
\begin{equation}
\chi < \chi\hackscore{max}
\end{equation}
where $\chi\hackscore{max}$ is a given value with $0<\chi\hackscore{max}<1$, we consider the following
criteria:
\begin{itemize}
\item As we are in the case of convergence we try to relax the tolerances by increasing $\gamma$ by a factor $\frac{1}{\omega}$ ($0<\omega<1$). So we would like to choose
$\gamma^{+} \ge \frac{\gamma}{\omega}$.
\item We need to make sure that the estimated convergence rate based on $\gamma^{+}$ will still maintain convergence:
\begin{equation}
\frac{\chi}{1-\gamma^{+}} \le \chi\hackscore{max}
\end{equation}
which translates into
\begin{equation}
\gamma^{+} \le 1 - \frac{\chi}{\chi\hackscore{max}}
\end{equation}
Notice that the right hand side is positive.
\item We do not want to iterate more than necessary, so we would like to reduce the error in the next step to just below the
desired tolerance:
\begin{equation}
\frac{\chi}{1-\gamma^{+}}\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0}) \ge f \cdot \tau \|v\hackscore{2}\|\hackscore{1}
\end{equation}
where the left hand side provides an estimate of the error predicted for the next step and $f$ is a
safety factor. This leads to
\begin{equation}
\gamma^{+} \le
1 -
\chi \frac{\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})} { f \cdot \tau \|v\hackscore{2}\|\hackscore{1}}
\end{equation}
\end{itemize}
Putting all these criteria together we get
\begin{equation}
\gamma^{+}=\min(\max(
\frac{\gamma}{\omega},
1 -
\chi \frac{\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})} { f \cdot \tau \|v\hackscore{2}\|\hackscore{1}}), 1 - \frac{\chi}{\chi\hackscore{max}})
\label{STOKES SET GAMMA PLUS}
\end{equation}
If we cannot see convergence, $\gamma$ is reduced by the factor $\omega$:
\begin{equation}
\gamma^{+}=\max(\omega \cdot \gamma, \gamma\hackscore{min})\label{STOKES SET GAMMA PLUS2}
\end{equation}
where $\gamma\hackscore{min}$ is introduced to make sure that $\gamma^{+}$ stays away from zero.

Applying~\ref{STOKES SET TOL} for $\gamma^{+}$ and~\ref{TOKES CONST UPDATE} we get the update formula
\begin{equation} \label{STOKES CONST UPDATE}
\tau\hackscore{2}^{+} =
\frac{\gamma^{+}}{\gamma\hackscore{0}} \tau\hackscore{2}
\mbox{ and } \tau\hackscore{1}^{+} =
\frac{\gamma^{+}}{\gamma\hackscore{0}}
\frac{1+\gamma\hackscore{0}}{1+\gamma^{+}}
\tau\hackscore{1}
\end{equation}
for the new tolerances $\tau\hackscore{1}^{+}$ and $\tau\hackscore{2}^{+}$ to be used in the next iteration step.
Notice that the coefficients $M$ and $K$ have been eliminated. The calculation of $\gamma\hackscore{0}$ requires
at least two iteration steps (or a previous time step).

Typical initial values are
\begin{equation} \label{STOKES VALUES}
\tau\hackscore{1}=\tau\hackscore{2}=\frac{1}{100}; \; \omega=\frac{1}{2}; \; \gamma=\frac{1}{10}; \; \gamma\hackscore{min}=10^{-6}
\end{equation}
where after the first step we set $\gamma\hackscore{0}=\gamma$ as no $\chi^{-}$ is available.

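The adaptive choice of $\gamma$, $\tau\hackscore{1}$ and $\tau\hackscore{2}$ described above can be summarised in the following sketch. All names are hypothetical and the function is not part of the \class{StokesProblemCartesian} interface: \var{error} and \var{error_prev} stand for $\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})$ of the current and previous step, \var{norm_v2} for $\|v\hackscore{2}\|\hackscore{1}$ and \var{tau} for the overall tolerance $\tau$:
\begin{python}
def update_tolerances(error, error_prev, norm_v2, tau, tau1, tau2,
                      gamma, chi_prev, chi_max=0.5, omega=0.5, f=0.5,
                      gamma_min=1.e-6):
    # observed convergence rate
    chi = error/error_prev
    # gamma0; after the first step no previous rate chi_prev is available
    if chi_prev is None:
        gamma0 = gamma
    else:
        gamma0 = max(1. - chi_prev/chi, gamma)
    if chi < chi_max:
        # convergence: relax gamma without exceeding chi_max or
        # overshooting the target tolerance (STOKES SET GAMMA PLUS)
        gamma_new = min(max(gamma/omega, 1. - chi*error/(f*tau*norm_v2)),
                        1. - chi/chi_max)
    else:
        # no convergence: tighten gamma (STOKES SET GAMMA PLUS2)
        gamma_new = max(omega*gamma, gamma_min)
    # rescale the sub-problem tolerances (STOKES CONST UPDATE)
    tau2_new = gamma_new/gamma0*tau2
    tau1_new = gamma_new/gamma0*(1.+gamma0)/(1.+gamma_new)*tau1
    return tau1_new, tau2_new, gamma_new, chi
\end{python}
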
\subsection{Functions}

\begin{classdesc}{StokesProblemCartesian}{domain}
opens the Stokes problem\index{Stokes problem} on the \Domain domain. The approximation
order of the domain needs to be two: order two will be used to approximate the velocity and order one
to approximate the pressure.
\end{classdesc}

\begin{methoddesc}[StokesProblemCartesian]{initialize}{\optional{f=Data(), \optional{fixed_u_mask=Data(), \optional{eta=1, \optional{surface_stress=Data(), \optional{stress=Data()}, \optional{restoration_factor=0}}}}}}
assigns values to the model parameters. In any call all values must be set.
\var{f} defines the external force $f$, \var{eta} the viscosity $\eta$,
\var{surface_stress} the surface stress $s$ and \var{stress} the initial stress $\sigma$.
The locations and components where the velocity is fixed are set by
the values of \var{fixed_u_mask}. \var{restoration_factor} defines the restoring force factor $\alpha$.
The method will try to cast the given values to appropriate
\Data class objects.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{solve}{v,p
\optional{, max_iter=100 \optional{, verbose=False \optional{, usePCG=True }}}}
solves the problem and returns approximations for velocity and pressure.
The arguments \var{v} and \var{p} define the initial guess. The values of \var{v} marked
by \var{fixed_u_mask} remain unchanged.
If \var{usePCG} is set to \True
the preconditioned conjugate gradient method (PCG) \index{preconditioned conjugate gradient method!PCG} scheme is used. Otherwise the problem is solved with the generalized minimal residual method (GMRES) \index{generalized minimal residual method!GMRES}. In most cases
the PCG scheme is more efficient.
\var{max_iter} defines the maximum number of iteration steps.

If \var{verbose} is set to \True, information on the progress of the solver is printed.
\end{methoddesc}


\begin{methoddesc}[StokesProblemCartesian]{setTolerance}{\optional{tolerance=1.e-4}}
sets the tolerance in an appropriate norm relative to the right hand side. The tolerance must be non-negative and less than 1.
\end{methoddesc}
\begin{methoddesc}[StokesProblemCartesian]{getTolerance}{}
returns the current relative tolerance.
\end{methoddesc}
\begin{methoddesc}[StokesProblemCartesian]{setAbsoluteTolerance}{\optional{tolerance=0.}}
sets the absolute tolerance for the error in the relevant norm. The tolerance must be non-negative. Typically the
absolute tolerance is set to 0.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{getAbsoluteTolerance}{}
returns the current absolute tolerance.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsVelocity}{}
returns the solver options used to solve the equation~(\ref{V CALC}) for the velocity.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsPressure}{}
returns the solver options used to solve the equation~(\ref{P PREC}) for the pressure.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsDiv}{}
returns the solver options used for solving the equation to project the divergence of the velocity onto the function space of the pressure.
\end{methoddesc}

\subsection{Example: Lid-Driven Cavity}
The following script \file{lid\hackscore driven\hackscore cavity.py}
\index{scripts!\file{lid\hackscore driven\hackscore cavity.py}}, which is available in the \ExampleDirectory,
illustrates the usage of the \class{StokesProblemCartesian} class to solve
the lid-driven cavity problem:
\begin{python}
from esys.escript import *
from esys.finley import Rectangle
from esys.escript.models import StokesProblemCartesian
NE=25
dom = Rectangle(NE, NE, order=2)      # order 2 is required by StokesProblemCartesian
x = dom.getX()
sc = StokesProblemCartesian(dom)
# fix the x-component of the velocity on the side walls and
# both components on the top and bottom of the cavity:
mask = (whereZero(x[0]) + whereZero(x[0]-1)) * [1., 0.] + \
       (whereZero(x[1]) + whereZero(x[1]-1)) * [1., 1.]
sc.initialize(eta=.1, fixed_u_mask=mask)
v = Vector(0., Solution(dom))
v[0] += whereZero(x[1]-1.)            # lid velocity v=(1,0) on the top edge
p = Scalar(0., ReducedSolution(dom))
v, p = sc.solve(v, p, verbose=True)
saveVTK("u.xml", velocity=v, pressure=p)
\end{python}
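
The stopping criterion and the iteration limit can be adjusted before the call to solve, for example:
\begin{python}
sc.setTolerance(1.e-6)
v, p = sc.solve(v, p, max_iter=200, verbose=True)
\end{python}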
