\section{The Stokes Problem}
\label{STOKES PROBLEM}
In this section we discuss how to solve the Stokes problem, which is defined as follows:

We want to calculate the velocity \index{velocity} field $v$ and pressure $p$ of an incompressible fluid \index{incompressible fluid}. They are given as the solution of the Stokes problem\index{Stokes problem}
\begin{equation}\label{Stokes 1}
-\left(\eta(v\hackscore{i,j}+ v\hackscore{j,i})\right)\hackscore{,j}+p\hackscore{,i}=f\hackscore{i}-\sigma\hackscore{ij,j}
\end{equation}
where $f\hackscore{i}$ defines an internal force \index{force, internal} and $\sigma\hackscore{ij}$ is an initial stress \index{stress, initial}. The viscosity $\eta$ may weakly depend on pressure and velocity. If relevant we will use the notation $\eta(v,p)$ to express this dependency.

We assume an incompressible medium:
\begin{equation}\label{Stokes 2}
-v\hackscore{i,i}=0
\end{equation}
Natural boundary conditions are taken in the form
\begin{equation}\label{Stokes Boundary}
\left(\eta(v\hackscore{i,j}+ v\hackscore{j,i})\right)n\hackscore{j}-n\hackscore{i}p=s\hackscore{i} - \alpha \cdot n\hackscore{i} n\hackscore{j} v\hackscore{j}+\sigma\hackscore{ij} n\hackscore{j}
\end{equation}
which can be overridden by constraints of the form
\begin{equation}\label{Stokes Boundary0}
v\hackscore{i}(x)=v^D\hackscore{i}(x)
\end{equation}
at some locations $x$ at the boundary of the domain. Here $s\hackscore{i}$ defines a normal stress and
$\alpha\ge 0$ the spring constant for restoring normal force.
The index $i$ may depend on the location $x$ on the boundary.
$v^D$ is a given function on the domain.

\subsection{Solution Method \label{STOKES SOLVE}}
In block form, equations~\ref{Stokes 1} and~\ref{Stokes 2} take the form of a saddle point problem
\index{saddle point problem}
\begin{equation}
\left[ \begin{array}{cc}
A & B^{*} \\
B & 0 \\
\end{array} \right]
\left[ \begin{array}{c}
v \\
p \\
\end{array} \right]
=\left[ \begin{array}{c}
G \\
0 \\
\end{array} \right]
\label{SADDLEPOINT}
\end{equation}
where $A$ is a coercive, self-adjoint linear operator in a suitable Hilbert space (assuming $A$ does not depend on $v$ or $p$), $B$ is the $(-1) \cdot$ divergence operator and $B^{*}$ is its adjoint operator (the gradient operator).
For more details on the mathematics see references \cite{AAMIRBERKYAN2008,MBENZI2005}.
We use iterative techniques to solve this problem: given approximations $v$ and $p$ for
velocity and pressure, we perform the following steps in the style of the Uzawa scheme \index{Uzawa scheme}:
\begin{enumerate}
\item Calculate the viscosity $\eta(v,p)$ and assemble the operator $A$ from $\eta$.
\item Solve for $dv$:
\begin{equation}
A dv = G - A v - B^{*} p \label{SADDLEPOINT ITER STEP 1}
\end{equation}
\item Update $v\hackscore{1}= v+ dv$.
\item If $\max(\|Bv\hackscore{1}\|\hackscore{0},\|dv\|\hackscore{1}) \le \tau \cdot \|v\hackscore{1}\|\hackscore{1}$: return $v\hackscore{1}, p$.
\item Solve for $dp$:
\begin{equation}
\begin{array}{rcl}
B A^{-1} B^{*} dp & = & Bv\hackscore{1} \\
A dv\hackscore{2} & = & B^{*} dp
\end{array}
\label{SADDLEPOINT ITER STEP 2}
\end{equation}
\item Update $p\hackscore{2}\leftarrow p+ dp$ and $v\hackscore{2}= v\hackscore{1} - dv\hackscore{2}$.
\item Go back to Step 1 with $v=v\hackscore{2}$ and $p=p\hackscore{2}$.
\end{enumerate}
where $\tau$ is the given tolerance. Notice that the operator $A$, if it depends on $v$ or $p$, is updated only once in each sweep of the iteration. In this algorithm
\begin{equation}
\|v\|\hackscore{1}^2 = \int\hackscore{\Omega} v\hackscore{j,k}v\hackscore{j,k} \; dx
\mbox{ and }
\|p\|\hackscore{0}^2= \int\hackscore{\Omega} p^2 \; dx.
\label{STOKES STOP}
\end{equation}
so the stopping criterion checks for convergence of the iteration as well as for
the incompressibility condition~\ref{Stokes 2} being fulfilled.
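The following small, self-contained sketch illustrates the structure of this sweep on a dense toy saddle point system, using numpy only. It is not the \class{StokesProblemCartesian} implementation: the matrices, the plain Euclidean norms and the direct solves for $A$ and for the Schur complement $B A^{-1} B^{*}$ are simplifications chosen for illustration; in practice these sub-problems are solved iteratively as described below.
\begin{python}
# Illustrative sketch only (not part of escript): the Uzawa-style sweep above on a
# small dense saddle point system, with direct solves standing in for the iterative
# sub-solvers and Euclidean norms standing in for the norms defined above.
import numpy as np

np.random.seed(0)
n, m = 12, 4                                   # toy velocity / pressure dimensions
M = np.random.randn(n, n)
A = np.dot(M, M.T) + n*np.eye(n)               # coercive, self-adjoint operator A
B = np.random.randn(m, n)                      # stands in for the (-1)*divergence operator
G = np.random.randn(n)                         # right hand side
tau = 1.e-8                                    # tolerance of the outer iteration

v, p = np.zeros(n), np.zeros(m)
for sweep in range(20):
    dv = np.linalg.solve(A, G - np.dot(A, v) - np.dot(B.T, p))  # step 2: A dv = G - A v - B* p
    v1 = v + dv                                                 # step 3
    if max(np.linalg.norm(np.dot(B, v1)), np.linalg.norm(dv)) <= tau*np.linalg.norm(v1):
        break                                                   # step 4: converged
    S = np.dot(B, np.linalg.solve(A, B.T))                      # Schur complement B A^{-1} B*
    dp = np.linalg.solve(S, np.dot(B, v1))                      # step 5: B A^{-1} B* dp = B v1
    dv2 = np.linalg.solve(A, np.dot(B.T, dp))                   #         A dv2 = B* dp
    p, v = p + dp, v1 - dv2                                     # step 6, then repeat from step 1
\end{python}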

To solve the update step~\ref{SADDLEPOINT ITER STEP 2} we use an iterative solver with iteration
operator $B A^{-1} B^{*}$, e.g. the generalized minimal residual method (GMRES) \index{linear solver!GMRES}\index{GMRES} or the preconditioned conjugate gradient method (PCG) \index{linear solver!PCG}\index{PCG}. A suitable preconditioner \index{preconditioner} for this operator is $\frac{1}{\eta}$, i.e.
the evaluation of the preconditioner $P$ for a given pressure increment $q$ is the solution of
\begin{equation} \label{P PREC}
\frac{1}{\eta} Pq = q \; .
\end{equation}
Note that in each evaluation of the iteration operator $q=B A^{-1} B^{*} s$ one needs to solve
the problem
\begin{equation} \label{P OPERATOR}
A w = B^{*} s
\end{equation}
with sufficient accuracy to return $q=Bw$. Notice that the residual $r$ is given as
\begin{equation} \label{STOKES RES}
r= B (v\hackscore{1} - A^{-1} B^{*} dp) = B (v\hackscore{1}-dv\hackscore{2}) = B v\hackscore{2}
\end{equation}
so in fact the residual $r$ is represented by the updated velocity $v\hackscore{2}$. This saves the recovery of
$dv\hackscore{2}$ in~\ref{SADDLEPOINT ITER STEP 2} after $dp$ has been calculated, as iterative methods such as PCG calculate the solution approximation along with its residual. In PCG the iteration is terminated if
\begin{equation} \label{PCG STOP}
\| P^{\frac{1}{2}}B v\hackscore{2} \|\hackscore{0} \le \tau\hackscore{2} \| P^{\frac{1}{2}}B v\hackscore{1} \|\hackscore{0}
\end{equation}
where $\tau\hackscore{2}$ is the given tolerance. The update step~\ref{P OPERATOR} involves the
solution of a sparse matrix problem. We use the tolerance $\tau\hackscore{2}^2$ for this solve in order to make sure that any
error from solving this problem does not interfere with the PCG iteration.
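As a minimal illustration of these two building blocks, the sketch below shows the operator and preconditioner applications for a small dense discretisation. The matrices \var{A}, \var{B} and the pointwise viscosity values \var{visc} are assumptions made for illustration, and the lumped scaling by $\eta$ is only one possible realisation of~\ref{P PREC}.
\begin{python}
# Illustrative sketch only: the two callbacks used by the pressure iteration, for a
# small dense system with matrices A (n x n), B (m x n) and pointwise viscosity
# values visc (length m). escript assembles and solves these with PDE objects instead.
import numpy as np

def apply_schur(s, A, B):
    # iteration operator q = B A^{-1} B* s: each application needs a velocity-type
    # solve A w = B* s, here done directly
    w = np.linalg.solve(A, np.dot(B.T, s))
    return np.dot(B, w)

def apply_precond(q, visc):
    # preconditioner P defined by (1/eta) P q = q, i.e. P q = eta*q,
    # realised here as a simple pointwise (lumped) scaling by the viscosity
    return visc * q
\end{python}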



We need to think about appropriate stopping criteria when solving
\ref{SADDLEPOINT ITER STEP 1}, \ref{SADDLEPOINT ITER STEP 2} and~\ref{P OPERATOR}. We would like to use very weak convergence criteria to reduce computational costs, but they need to be tight enough not to interfere with the
convergence of the iteration process one level above. From Equation~\ref{SADDLEPOINT ITER STEP 1} we have
\begin{equation}
v\hackscore{1} = e\hackscore{1} + A^{-1} ( G - B^{*} p )
\end{equation}
where $e\hackscore{1}$ denotes the error introduced by solving~\ref{SADDLEPOINT ITER STEP 1} only approximately.
We will use a sparse matrix solver, so we do not have full control over the norm $\|.\|\hackscore{s}$ used in its stopping criterion
\begin{equation}
\| G - A v\hackscore{1} - B^{*} p \|\hackscore{s} \le \tau\hackscore{1} \| G - A v - B^{*} p \|\hackscore{s}
\end{equation}
As $G - A v\hackscore{1} - B^{*} p = -A e\hackscore{1}$ and $G - A v - B^{*} p = A ( dv\hackscore{1} - e\hackscore{1})$, where $dv\hackscore{1}$ denotes the velocity increment calculated from~\ref{SADDLEPOINT ITER STEP 1}, this translates into the condition
\begin{equation}
\| e\hackscore{1} \|\hackscore{1} \le K \tau\hackscore{1} \| dv\hackscore{1} - e\hackscore{1} \|\hackscore{1}
\mbox{ and therefore }
\| e\hackscore{1} \|\hackscore{1} \le \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}} \| dv\hackscore{1} \|\hackscore{1}
\end{equation}
The constant $K$ represents some uncertainty combining a variety of unknown factors such as the
solver being used and the condition number of the stiffness matrix.

From the first equation of~\ref{SADDLEPOINT ITER STEP 2} we have
\begin{equation}
p\hackscore{2} = p + (B A^{-1} B^{*})^{-1} (e\hackscore{2} + Bv\hackscore{1} ) =
(B A^{-1} B^{*})^{-1} ( e\hackscore{2} + B e\hackscore{1} + B A^{-1} G)
\end{equation}
where $e\hackscore{2}$ accounts for the error introduced by solving the first equation of~\ref{SADDLEPOINT ITER STEP 2} only approximately, and similarly
\begin{equation}
v\hackscore{2} = v\hackscore{1} - dv\hackscore{2}
= v\hackscore{1} - A^{-1} B^{*}dp
= e\hackscore{1} + A^{-1} ( G - B^{*} p ) - A^{-1} B^{*} (p\hackscore{2}-p)
= e\hackscore{1} + A^{-1} ( G - B^{*} p\hackscore{2})
\end{equation}
This shows that we can write the iterative process as a fixed point iteration to solve (assuming all errors are zero)
\begin{equation}
v = \Phi(v,p) \mbox{ and } p = \Psi(v,p)
\end{equation}
where
\begin{equation}
\begin{array}{rcl}
\Psi(v,p) & = & (B A^{-1} B^{*})^{-1} B A^{-1} G \\
\Phi(v,p) & = & A^{-1} ( G - B^{*} (B A^{-1} B^{*})^{-1} B A^{-1} G )
\end{array}
\end{equation}
Notice that if $A$ is independent of $v$ and $p$ the operators $\Phi(v,p)$ and $\Psi(v,p)$ are constant
and therefore the iteration will terminate (provided there are no termination errors in the sub-iterations) after one step.
We can also give a formula for the errors
\begin{equation}
\begin{array}{rcl}
\delta p & = & (B A^{-1} B^{*})^{-1} ( e\hackscore{2} + B e\hackscore{1} ) \\
\delta v & = & e\hackscore{1} - A^{-1} B^{*}\delta p
\end{array}\label{STOKES ERRORS}
\end{equation}
Notice that $B\delta v = - e\hackscore{2}=-Bv\hackscore{2}$.
With this notation
\begin{equation}
\begin{array}{rcl}
v\hackscore{2} & = & \Phi(v,p) + \delta v \\
p\hackscore{2} & = & \Psi(v,p) + \delta p \\
\end{array}
\end{equation}
We use $dv=v\hackscore{2}-v$ and $Bv\hackscore{1}=B A^{-1} B^{*} dp$ to measure the error of the
current approximations $v\hackscore{2}$ and $p\hackscore{2}$ with respect to the exact solution.
Assuming that the iteration does converge with a convergence rate $\chi^{-}$ we have the estimate
\begin{equation}
\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})
\le \chi^{-} \max(\|dv^{-}\|\hackscore{1}, \|Bv\hackscore{1}^{-}\|\hackscore{0})
+ \max(\|\delta v\|\hackscore{1}, \|(B A^{-1} B^{*}) \delta p\|\hackscore{0})
\end{equation}
where the upper index $-$ refers to the increments in the previous step.
Now we try to establish estimates for $\|\delta v\|\hackscore{1}$ and $\|(B A^{-1} B^{*}) \delta p\|\hackscore{0}$ from
formulas~\ref{STOKES ERRORS}, where we neglect the $\delta p$ term in the equation for $\delta v$, assuming that
$\delta p$ is controlled. We have
\begin{equation}
\|\delta v\|\hackscore{1} \approx \|e\hackscore{1}\|\hackscore{1} \le \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}} \| dv\hackscore{1} \|\hackscore{1} \approx \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}} \| dv\|\hackscore{1}
\end{equation}
and
\begin{equation}
\|(B A^{-1} B^{*}) \delta p\|\hackscore{0} \approx \| e\hackscore{2} \|\hackscore{0}
= \| B v\hackscore{2} \|\hackscore{0} \le M \tau\hackscore{2} \| B v\hackscore{1} \|\hackscore{0}
\end{equation}
which leads to
\begin{equation}
\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})
\le \frac{\chi^{-}}{1-\max(M \tau\hackscore{2}, \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}})} \max(\|dv^{-}\|\hackscore{1}, \|Bv\hackscore{1}^{-}\|\hackscore{0}) \label{STOKES ESTIMTED IMPROVEMENT}
\end{equation}
where the upper index $-$ refers to the previous iteration step.
If we require $\max(M \tau\hackscore{2}, \frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}})\le \gamma$
with a given $0<\gamma<1$ we can set
\begin{equation} \label{STOKES SET TOL}
\tau\hackscore{2} = \frac{\gamma}{M} \mbox{ and } \tau\hackscore{1} = \frac{1}{K} \frac{\gamma}{1+\gamma}
\end{equation}
The expression for $\tau\hackscore{1}$ follows from solving $\frac{K \tau\hackscore{1}}{1-K \tau\hackscore{1}} = \gamma$ for $\tau\hackscore{1}$.
The problem is that we do not know $M$ and $K$, but we can use the observed convergence rate
\begin{equation}
\chi := \frac{\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})}{\max(\|dv^{-}\|\hackscore{1}, \|Bv\hackscore{1}^{-}\|\hackscore{0})}
\end{equation}
to construct estimates of $M$ and $K$. We look at the two cases where our prediction and choice of
the tolerances was good and where it was not:
\begin{equation}
\chi \le \frac{\chi^{-}}{1-\gamma} \mbox{ or }
\chi = \frac{\chi^{-}}{1-\gamma\hackscore{0}} >\frac{\chi^{-}}{1-\gamma}
\end{equation}
which translates to
\begin{equation}
\gamma\hackscore{0} \le \gamma \mbox{ or } \gamma\hackscore{0}=1-\frac{\chi^{-}}{ \chi}>\gamma>0
\end{equation}
In the second case we use \ref{STOKES SET TOL} with $\gamma=\gamma\hackscore{0}$ to get new estimates $M^{+}$ and $K^{+}$
for $M$ and $K$:
\begin{equation} \label{TOKES CONST UPDATE}
M^{+} =
\frac{\gamma\hackscore{0}}{\gamma} M
\mbox{ and } K^{+} =
\frac{\gamma\hackscore{0}}{\gamma}
\frac{1+\gamma}{1+\gamma\hackscore{0}}
K
\mbox{ with } \gamma\hackscore{0}=\max(1-\frac{\chi^{-}}{ \chi},\gamma)
\end{equation}
With these updated constants we can now get better values for the tolerances via an updated value $\gamma^{+}$ for $\gamma$ and the corrected values $M^{+}$ and $K^{+}$ for $M$ and $K$. If we are in the case of convergence, which we
measure by
\begin{equation}
\chi < \chi\hackscore{max}
\end{equation}
where $\chi\hackscore{max}$ is a given value with $0<\chi\hackscore{max}<1$, we consider the following
criteria:
\begin{itemize}
\item As we are in the case of convergence we try to relax the tolerances by increasing $\gamma$ by a factor $\frac{1}{\omega}$ ($0<\omega<1$). So we would like to choose
$\gamma^{+} \ge \frac{\gamma}{\omega}$.
\item We need to make sure that the estimated convergence rate based on $\gamma^{+}$ will still maintain convergence:
\begin{equation}
\frac{\chi}{1-\gamma^{+}} \le \chi\hackscore{max}
\end{equation}
which translates into
\begin{equation}
\gamma^{+} \le 1 - \frac{\chi}{\chi\hackscore{max}}
\end{equation}
Notice that the right hand side is positive.
\item We do not want to iterate more than necessary, so we would like to reduce the error in the next step to just below the
desired tolerance:
\begin{equation}
\frac{\chi}{1-\gamma^{+}}\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0}) \ge f \cdot \tau \|v\hackscore{2}\|\hackscore{1}
\end{equation}
where the left hand side provides an estimate of the error predicted for the next step and $f$ is a
safety factor. This leads to
\begin{equation}
\gamma^{+} \le
1 -
\chi \frac{\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})} { f \cdot \tau \|v\hackscore{2}\|\hackscore{1}}
\end{equation}
\end{itemize}
Putting all these criteria together we get
\begin{equation}
\gamma^{+}=\min(\max(
\frac{\gamma}{\omega},
1 -
\chi \frac{\max(\|dv\|\hackscore{1}, \|Bv\hackscore{1}\|\hackscore{0})} { f \cdot \tau \|v\hackscore{2}\|\hackscore{1}}), 1 - \frac{\chi}{\chi\hackscore{max}})
\label{STOKES SET GAMMA PLUS}
\end{equation}
In the case where we cannot observe convergence, $\gamma$ is reduced by the factor $\omega$:
\begin{equation}
\gamma^{+}=\max(\omega \cdot \gamma, \gamma\hackscore{min})\label{STOKES SET GAMMA PLUS2}
\end{equation}
where $\gamma\hackscore{min}$ is introduced to make sure that $\gamma^{+}$ stays away from zero.


Applying~\ref{STOKES SET TOL} for $\gamma^{+}$ and~\ref{TOKES CONST UPDATE} we get the update formula
\begin{equation} \label{STOKES CONST UPDATE}
\tau\hackscore{2}^{+} =
\frac{\gamma^{+}}{\gamma\hackscore{0}} \tau\hackscore{2}
\mbox{ and } \tau\hackscore{1}^{+} =
\frac{\gamma^{+}}{\gamma\hackscore{0}}
\frac{1+\gamma\hackscore{0}}{1+\gamma^{+}}
\tau\hackscore{1}
\end{equation}
for the new tolerances $\tau\hackscore{1}^{+}$ and $\tau\hackscore{2}^{+}$ to be used in the next iteration step.
Notice that the coefficients $M$ and $K$ have been eliminated. The calculation of $\gamma\hackscore{0}$ requires
at least two iteration steps (or a previous time step).

Typical initial values are
\begin{equation} \label{STOKES VALUES}
\tau\hackscore{1}=\tau\hackscore{2}=\frac{1}{100}; \; \omega=\frac{1}{2}; \; \gamma=\frac{1}{10}; \; \gamma\hackscore{min}=10^{-6}
\end{equation}
where after the first step we set $\gamma\hackscore{0}=\gamma$ as no $\chi^{-}$ is available yet.
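
The adaptive choice of the tolerances described by equations~\ref{STOKES SET GAMMA PLUS}, \ref{STOKES SET GAMMA PLUS2} and \ref{STOKES CONST UPDATE} can be summarised by the following sketch. It is not the code used by \class{StokesProblemCartesian}; the function name, the argument names and the default values for $f$ and $\chi\hackscore{max}$ (which the text above leaves open) are assumptions made for illustration only.
\begin{python}
# Illustrative sketch only: one update of the inner tolerances tau1, tau2 and of gamma
# from the observed convergence rates chi (current sweep) and chi_prev (previous sweep).
def update_tolerances(tau1, tau2, gamma, chi, chi_prev,
                      norm_dv, norm_Bv1, norm_v2, tau,
                      chi_max=0.5, omega=0.5, f=0.5, gamma_min=1.e-6):
    # gamma0 estimated from the change of the convergence rate;
    # after the first sweep no chi_prev is available and gamma0 = gamma is used
    gamma0 = gamma if chi_prev is None else max(1. - chi_prev/chi, gamma)
    if chi < chi_max:
        # convergence observed: relax gamma, but keep the predicted rate below chi_max
        # and avoid iterating below the requested tolerance
        gamma_new = min(max(gamma/omega,
                            1. - chi*max(norm_dv, norm_Bv1)/(f*tau*norm_v2)),
                        1. - chi/chi_max)
    else:
        # no convergence observed: tighten gamma
        gamma_new = max(omega*gamma, gamma_min)
    # update the tolerances; the constants M and K have been eliminated
    tau2_new = gamma_new/gamma0 * tau2
    tau1_new = gamma_new/gamma0 * (1.+gamma0)/(1.+gamma_new) * tau1
    return tau1_new, tau2_new, gamma_new
\end{python}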


\subsection{Functions}

\begin{classdesc}{StokesProblemCartesian}{domain}
opens the Stokes problem\index{Stokes problem} on the \Domain \var{domain}. The approximation
order of the domain needs to be two: order two is used to approximate the velocity and order one
to approximate the pressure.
\end{classdesc}

\begin{methoddesc}[StokesProblemCartesian]{initialize}{\optional{f=Data(), \optional{fixed_u_mask=Data(), \optional{eta=1, \optional{surface_stress=Data(), \optional{stress=Data()}, \optional{restoration_factor=0}}}}}}
assigns values to the model parameters. In any call all values must be set.
\var{f} defines the external force $f$, \var{eta} the viscosity $\eta$,
\var{surface_stress} the surface stress $s$ and \var{stress} the initial stress $\sigma$.
The locations and components where the velocity is fixed are set by
the values of \var{fixed_u_mask}. \var{restoration_factor} defines the restoring force factor $\alpha$.
The method will try to cast the given values to appropriate
\Data class objects.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{solve}{v,p
\optional{, max_iter=100 \optional{, verbose=False \optional{, usePCG=True }}}}
solves the problem and returns approximations for velocity and pressure.
The arguments \var{v} and \var{p} define the initial guess. The values of \var{v} marked
by \var{fixed_u_mask} remain unchanged.
If \var{usePCG} is set to \True
the preconditioned conjugate gradient method (PCG) \index{preconditioned conjugate gradient method!PCG} scheme is used. Otherwise the problem is solved with the generalized minimal residual method (GMRES) \index{generalized minimal residual method!GMRES}. In most cases
the PCG scheme is more efficient.
\var{max_iter} defines the maximum number of iteration steps.

If \var{verbose} is set to \True information on the progress of the solver is printed.
\end{methoddesc}


\begin{methoddesc}[StokesProblemCartesian]{setTolerance}{\optional{tolerance=1.e-4}}
sets the tolerance in an appropriate norm relative to the right hand side. The tolerance must be non-negative and less than 1.
\end{methoddesc}
\begin{methoddesc}[StokesProblemCartesian]{getTolerance}{}
returns the current relative tolerance.
\end{methoddesc}
\begin{methoddesc}[StokesProblemCartesian]{setAbsoluteTolerance}{\optional{tolerance=0.}}
sets the absolute tolerance for the error in the relevant norm. The tolerance must be non-negative. Typically the
absolute tolerance is set to 0.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{getAbsoluteTolerance}{}
returns the current absolute tolerance.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsVelocity}{}
returns the solver options used to solve the equations~(\ref{V CALC}) for velocity.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsPressure}{}
returns the solver options used to solve the equation~(\ref{P PREC}) for pressure.
\end{methoddesc}

\begin{methoddesc}[StokesProblemCartesian]{getSolverOptionsDiv}{}
returns the solver options used to solve the equation for projecting the divergence of the velocity onto the function space of the pressure.
\end{methoddesc}
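
A brief usage sketch of the tolerance and solver option interface, assuming \var{sc} is an existing \class{StokesProblemCartesian} instance:
\begin{python}
sc.setTolerance(1.e-5)           # relative tolerance of the saddle point iteration
sc.setAbsoluteTolerance(0.)      # keep the default absolute tolerance
opts_v = sc.getSolverOptionsVelocity()   # options of the velocity solves
opts_p = sc.getSolverOptionsPressure()   # options of the pressure preconditioner solves
\end{python}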


\subsection{Example: Lid-Driven Cavity}
The following script \file{lid\hackscore driven\hackscore cavity.py}
\index{scripts!\file{lid\hackscore driven\hackscore cavity.py}}, which is available in the \ExampleDirectory,
illustrates the usage of the \class{StokesProblemCartesian} class to solve
the lid-driven cavity problem:
\begin{python}
from esys.escript import *
from esys.finley import Rectangle
from esys.escript.models import StokesProblemCartesian
NE=25
# rectangular domain with second order elements for the velocity
dom = Rectangle(NE,NE,order=2)
x = dom.getX()
sc=StokesProblemCartesian(dom)
# fix the normal velocity component on the sides and bottom (free slip)
# and both velocity components on the top (the moving lid):
mask= whereZero(x[0])*[1.,0.] + whereZero(x[0]-1)*[1.,0.] + \
      whereZero(x[1])*[0.,1.] + whereZero(x[1]-1)*[1.,1.]
sc.initialize(eta=.1, fixed_u_mask=mask)
# initial guess: unit horizontal velocity on the lid, zero pressure
v=Vector(0.,Solution(dom))
v[0]+=whereZero(x[1]-1.)
p=Scalar(0.,ReducedSolution(dom))
v,p=sc.solve(v,p, verbose=True)
saveVTK("u.xml",velocity=v,pressure=p)
\end{python}
