\chapter{The module \pyvisi}
\label{PYVISI CHAP}
\declaremodule{extension}{esys.pyvisi}
\modulesynopsis{Python Visualization Interface}

\section{Introduction}
\pyvisi is a Python module used to generate 2D and 3D visualizations
for escript and its PDE solvers: finley and bruce. The module provides
an easy-to-use interface to the \VTK library (\VTKUrl). Pyvisi can be used to
render (generate) surface maps and contours for scalar fields, arrows and
streamlines for vector fields, and ellipsoids for tensor fields.
There are three approaches to rendering an object: (1) Online - the object is
rendered on-screen with interaction capability (i.e. zoom and rotate),
(2) Offline - the object is rendered off-screen (no pop-up window) and
(3) Display - the object is rendered on-screen but with no interaction
capability (on-the-fly animation). All three approaches have the option of
saving the rendered object as an image (e.g. JPEG).
19
The following outlines the general steps to use Pyvisi:

\begin{enumerate}
\item Create a \Scene instance - a window in which objects are to be
rendered.
\item Create a data input instance (e.g. \DataCollector or \ImageReader) -
reads and loads the source data for visualization.
\item Create a data visualization instance (e.g. \Map, \Velocity, \Ellipsoid,
\Contour, \Carpet, \StreamLine or \Image) - processes and manipulates
the source data.
\item Create a \Camera or \Light instance - controls the viewing angle and
lighting effects.
\item Render the object - using either the Online, Offline or Display approach.
\end{enumerate}
34 \begin{center}
35 \begin{math}
36 scene \rightarrow data \; input \rightarrow data \; visualization \rightarrow
37 camera \, / \, light \rightarrow render
38 \end{math}
39 \end{center}
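
The following is a minimal sketch of this workflow. The mesh file, field and
image names are placeholders only; complete, runnable examples are given in
Section \ref{DATAVIS SEC}.

\begin{python}
# Minimal workflow sketch: scene -> data input -> data visualization
# -> camera -> render. File and field names below are placeholders.
from esys.pyvisi import Scene, DataCollector, Map, Camera
from esys.pyvisi.constant import *

# Step 1: create a Scene (the rendering window).
s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 800, y_size = 600)

# Step 2: create a data input instance and load the source data.
dc = DataCollector(source = Source.XML)
dc.setFileName(file_name = "my_data.xml")      # placeholder file name

# Step 3: create a data visualization instance.
m = Map(scene = s, data_collector = dc, viewport = Viewport.SOUTH_WEST,
        lut = Lut.COLOR, cell_to_point = False, outline = True)

# Step 4: create a Camera to control the viewing angle.
c = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
c.isometricView()

# Step 5: render the object and save it as an image.
s.render(image_name = "my_image.jpg")
\end{python}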
40
41 \section{\pyvisi Classes}
42 The following subsections give a brief overview of the important classes
43 and some of their corresponding methods. Please refer to \ReferenceGuide for
44 full details.
45
46
47 %#############################################################################
48
49
50 \subsection{Scene Classes}
This subsection details the instances used to set up the viewing environment.
52
53 \subsubsection{\Scene class}
54
\begin{classdesc}{Scene}{renderer = Renderer.ONLINE, num_viewport = 1,
x_size = 1152, y_size = 864}
A scene is a window in which objects are rendered. Only one scene needs to be
created. However, a scene may be divided into four smaller windows called
viewports (if needed). Each viewport can in turn render a different object.
\end{classdesc}
62
63 The following are some of the methods available:
64 \begin{methoddesc}[Scene]{setBackground}{color}
65 Set the background color of the scene.
66 \end{methoddesc}
67
68 \begin{methoddesc}[Scene]{render}{image_name = None}
69 Render the object using either the Online, Offline or Display mode.
70 \end{methoddesc}
71
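The short sketch below shows a \Scene being created and its background color
changed before rendering. The \texttt{Color.WHITE} constant is an assumption
about the contents of \texttt{esys.pyvisi.constant}; any valid color value
accepted by \texttt{setBackground()} can be used instead.

\begin{python}
# Sketch: create a Scene, change its background color and render it.
# Color.WHITE is assumed to exist in esys.pyvisi.constant.
from esys.pyvisi import Scene
from esys.pyvisi.constant import *

s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 1152, y_size = 864)
s.setBackground(color = Color.WHITE)   # assumed color constant

# ... create data input, visualization and camera instances here ...

s.render(image_name = "scene.jpg")
\end{python}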
72 \subsubsection{\Camera class}
73
74 \begin{classdesc}{Camera}{scene, viewport = Viewport.SOUTH_WEST}
75 A camera controls the display angle of the rendered object and one is
76 usually created for a \Scene. However, if a \Scene has four viewports, then a
77 separate camera may be created for each viewport.
78 \end{classdesc}
79
80 The following are some of the methods available:
81 \begin{methoddesc}[Camera]{setFocalPoint}{position}
82 Set the focal point of the camera.
83 \end{methoddesc}
84
85 \begin{methoddesc}[Camera]{setPosition}{position}
86 Set the position of the camera.
87 \end{methoddesc}
88
89 \begin{methoddesc}[Camera]{azimuth}{angle}
90 Rotate the camera to the left and right.
91 \end{methoddesc}
92
\begin{methoddesc}[Camera]{elevation}{angle}
Rotate the camera up and down (the angle must be between -90 and 90 degrees).
\end{methoddesc}
96
97 \begin{methoddesc}[Camera]{backView}{}
98 Rotate the camera to view the back of the rendered object.
99 \end{methoddesc}
100
101 \begin{methoddesc}[Camera]{topView}{}
102 Rotate the camera to view the top of the rendered object.
103 \end{methoddesc}
104
105 \begin{methoddesc}[Camera]{bottomView}{}
106 Rotate the camera to view the bottom of the rendered object.
107 \end{methoddesc}
108
109 \begin{methoddesc}[Camera]{leftView}{}
110 Rotate the camera to view the left side of the rendered object.
111 \end{methoddesc}
112
113 \begin{methoddesc}[Camera]{rightView}{}
114 Rotate the camera to view the right side of the rendered object.
115 \end{methoddesc}
116
117 \begin{methoddesc}[Camera]{isometricView}{}
118 Rotate the camera to view the isometric angle of the rendered object.
119 \end{methoddesc}
120
\begin{methoddesc}[Camera]{dolly}{distance}
Move the camera towards the rendered object (a distance greater than 1 moves
the camera closer). Note that the camera cannot be moved away from the
rendered object.
\end{methoddesc}
125
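The sketch below combines several of the methods above to position a camera
explicitly and then adjust it by relative rotations. The coordinate values
are placeholders for an actual domain.

\begin{python}
# Sketch: position a Camera explicitly and by relative rotations.
# GlobalPosition values are placeholders for a real domain.
from esys.pyvisi import Scene, Camera, GlobalPosition
from esys.pyvisi.constant import *

s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 800, y_size = 600)

c = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
c.setFocalPoint(GlobalPosition(0.5, 0.5, 0.5))  # look at this point
c.setPosition(GlobalPosition(0.5, 0.5, 5.0))    # place the camera
c.azimuth(angle = 30)        # rotate 30 degrees to the side
c.elevation(angle = -20)     # rotate 20 degrees downwards
c.dolly(distance = 1.2)      # move the camera closer to the object
\end{python}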
126 \subsubsection{\Light class}
127
128 \begin{classdesc}{Light}{scene, viewport = Viewport.SOUTH_WEST}
129 A light controls the lighting effect for the rendered object and works in
130 a similar way to \Camera.
131 \end{classdesc}
132
133 The following are some of the methods available:
134 \begin{methoddesc}[Light]{setColor}{color}
135 Set the light color.
136 \end{methoddesc}
137
138 \begin{methoddesc}[Light]{setFocalPoint}{position}
139 Set the focal point of the light.
140 \end{methoddesc}
141
142 \begin{methoddesc}[Light]{setPosition}{position}
143 Set the position of the light.
144 \end{methoddesc}
145
\begin{methoddesc}[Light]{setAngle}{elevation = 0, azimuth = 0}
An alternative way of setting the position and focal point of the light,
using elevation and azimuth angles.
\end{methoddesc}
150
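A brief sketch of adding a light to a scene is shown below. The
\texttt{Color.RED} constant is an assumption about
\texttt{esys.pyvisi.constant}; any valid color value accepted by
\texttt{setColor()} can be substituted.

\begin{python}
# Sketch: add a Light to a Scene and aim it using elevation/azimuth.
# Color.RED is assumed to exist in esys.pyvisi.constant.
from esys.pyvisi import Scene, Light
from esys.pyvisi.constant import *

s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 800, y_size = 600)

lt = Light(scene = s, viewport = Viewport.SOUTH_WEST)
lt.setColor(color = Color.RED)             # assumed color constant
lt.setAngle(elevation = 45, azimuth = 30)  # aim the light
\end{python}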
151
152 %##############################################################################
153
154
155 \subsection{Input Classes}
156 \label{INPUT SEC}
157 This subsection details the instances used to read and load the source data
158 for visualization.
159
160 \subsubsection{\DataCollector class}
\begin{classdesc}{DataCollector}{source = Source.XML}
A data collector is used to read data either from an XML file (using
\texttt{setFileName()}) or from an escript object directly (using
\texttt{setData()}). Writing XML files is expensive, but this approach has
the advantage that the results can easily be analyzed after the
simulation has completed.
\end{classdesc}
168
169 The following are some of the methods available:
170 \begin{methoddesc}[DataCollector]{setFileName}{file_name}
171 Set the XML file name to read.
172 \end{methoddesc}
173
\begin{methoddesc}[DataCollector]{setData}{**args}
Pass in data using \textless name\textgreater=\textless data\textgreater
keyword pairs. The data is assumed to be given in the appropriate format.
\end{methoddesc}
179
180 \begin{methoddesc}[DataCollector]{setActiveScalar}{scalar}
181 Specify the scalar field to load.
182 \end{methoddesc}
183
184 \begin{methoddesc}[DataCollector]{setActiveVector}{vector}
185 Specify the vector field to load.
186 \end{methoddesc}
187
188 \begin{methoddesc}[DataCollector]{setActiveTensor}{tensor}
189 Specify the tensor field to load.
190 \end{methoddesc}
191
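The sketch below contrasts the two ways of loading source data into a
\DataCollector. The file, field and keyword names are placeholders, and the
escript Data object \texttt{T} is assumed to come from a PDE solver (a full
escript example is given at the end of this chapter).

\begin{python}
# Sketch: the two ways of loading source data into a DataCollector.
# File, field and keyword names below are placeholders.
from esys.pyvisi import DataCollector
from esys.pyvisi.constant import *

# (a) Read from an XML file and select the field to visualize.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = "results.xml")
dc1.setActiveScalar(scalar = "temperature")

# (b) Read directly from an escript object, here a Data object T
#     produced by a PDE solver (see the escript example later on).
# dc2 = DataCollector(source = Source.ESCRIPT)
# dc2.setData(temp = T)    # <name>=<data> keyword pairing
\end{python}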
192 \subsubsection{\ImageReader class}
193
194 \begin{classdesc}{ImageReader}{format}
195 An image reader is used to read data from an image in a variety of formats.
196 \end{classdesc}
197
198 The following are some of the methods available:
199 \begin{methoddesc}[ImageReader]{setImageName}{image_name}
200 Set the image name to be read.
201 \end{methoddesc}
202
203 \subsubsection{\TextTwoD class}
204
\begin{classdesc}{Text2D}{scene, text, viewport = Viewport.SOUTH_WEST}
A two-dimensional text is used to annotate the rendered object
(e.g. inserting titles, authors and labels).
\end{classdesc}
209
210 The following are some of the methods available:
211 \begin{methoddesc}[Text2D]{setFontSize}{size}
212 Set the 2D text size.
213 \end{methoddesc}
214
215 \begin{methoddesc}[Text2D]{boldOn}{}
216 Bold the 2D text.
217 \end{methoddesc}
218
219 \begin{methoddesc}[Text2D]{setColor}{color}
220 Set the color of the 2D text.
221 \end{methoddesc}
222
Methods from \ActorTwoD are also available.
224
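A short sketch of annotating a scene with a 2D text title is shown below. The
\texttt{Color.BLACK} constant and the pixel coordinates passed to
\texttt{setPosition()} are assumptions for illustration only.

\begin{python}
# Sketch: annotate a scene with a 2D text title.
# Color.BLACK is assumed to exist in esys.pyvisi.constant.
from esys.pyvisi import Scene, Text2D, LocalPosition
from esys.pyvisi.constant import *

s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 800, y_size = 600)

t = Text2D(scene = s, text = "Temperature at t = 1.0",
        viewport = Viewport.SOUTH_WEST)
t.setFontSize(size = 22)
t.boldOn()
t.setColor(color = Color.BLACK)        # assumed color constant
t.setPosition(LocalPosition(30, 570))  # from ActorTwoD (placeholder position)
\end{python}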
225
226 %##############################################################################
227
228
229 \subsection{Data Visualization Classes}
230 \label{DATAVIS SEC}
This subsection details the instances used to process and manipulate the source
data. The typical usage of some of the classes is also shown.

One point to note is that the source data can be either point or cell data. If
the source is cell data, a conversion to point data may or may not be
required in order for the object to be rendered correctly.
If a conversion is needed, the 'cell_to_point' flag (see below) must
be set to 'True', otherwise 'False' (which is the default). On occasion, an
inaccurate object may be rendered from cell data even after conversion.
240
241 \subsubsection{\Map class}
242
243 \begin{classdesc}{Map}{scene, data_collector,
244 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
245 outline = True}
246 Class that shows a scalar field on a domain surface. The domain surface
247 can either be colored or grey-scaled, depending on the lookup table used.
248 \end{classdesc}
249
250 The following are some of the methods available:\\
251 Methods from \ActorThreeD and \DataSetMapper.
252
253 A typical usage of \Map is shown below.
254
255 \begin{python}
256 """
257 Author: John Ngui, john.ngui@uq.edu.au
258 """
259
260 # Import the necessary modules.
261 from esys.pyvisi import Scene, DataCollector, Map, Camera
262 from esys.pyvisi.constant import *
263 import os
264
265 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
266 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
267 X_SIZE = 800
268 Y_SIZE = 800
269
270 SCALAR_FIELD_POINT_DATA = "temperature"
271 SCALAR_FIELD_CELL_DATA = "temperature_cell"
272 FILE_3D = "interior_3D.xml"
273 IMAGE_NAME = "map.jpg"
274 JPG_RENDERER = Renderer.ONLINE_JPG
275
276 # Create a Scene with four viewports.
277 s = Scene(renderer = JPG_RENDERER, num_viewport = 4, x_size = X_SIZE,
278 y_size = Y_SIZE)
279
280 # Create a DataCollector reading from a XML file.
281 dc1 = DataCollector(source = Source.XML)
282 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
283 dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)
284
285 # Create a Map for the first viewport.
286 m1 = Map(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
287 lut = Lut.COLOR, cell_to_point = False, outline = True)
288 m1.setRepresentationToWireframe()
289
290 # Create a Camera for the first viewport
291 c1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
292 c1.isometricView()
293
294 # Create a second DataCollector reading from the same XML file but specifying
295 # a different scalar field.
296 dc2 = DataCollector(source = Source.XML)
297 dc2.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
298 dc2.setActiveScalar(scalar = SCALAR_FIELD_CELL_DATA)
299
300 # Create a Map for the third viewport.
301 m2 = Map(scene = s, data_collector = dc2, viewport = Viewport.NORTH_EAST,
302 lut = Lut.COLOR, cell_to_point = True, outline = True)
303
304 # Create a Camera for the third viewport
305 c2 = Camera(scene = s, viewport = Viewport.NORTH_EAST)
306
307 # Render the object.
308 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, IMAGE_NAME))
309 \end{python}
310
311 \subsubsection{\MapOnPlaneCut class}
312
313 \begin{classdesc}{MapOnPlaneCut}{scene, data_collector,
314 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
315 outline = True}
316 This class works in a similar way to \Map, except that it shows a scalar
317 field cut using a plane. The plane can be translated and rotated along the
318 X, Y and Z axes.
319 \end{classdesc}
320
321 The following are some of the methods available:\\
322 Methods from \ActorThreeD, \Transform and \DataSetMapper.
323
324 \subsubsection{\MapOnPlaneClip class}
325
326 \begin{classdesc}{MapOnPlaneClip}{scene, data_collector,
327 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
328 outline = True}
329 This class works in a similar way to \MapOnPlaneCut, except that it shows a
330 scalar field clipped using a plane.
331 \end{classdesc}
332
333 The following are some of the methods available:\\
334 Methods from \ActorThreeD, \Transform, \Clipper and \DataSetMapper.
335
336 \subsubsection{\MapOnScalarClip class}
337
338 \begin{classdesc}{MapOnScalarClip}{scene, data_collector,
339 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
340 outline = True}
341 This class works in a similar way to \Map, except that it shows a scalar
342 field clipped using a scalar value.
343 \end{classdesc}
344
345 The following are some of the methods available:\\
346 Methods from \ActorThreeD, \Clipper and \DataSetMapper.
347
348 \subsubsection{\MapOnScalarClipWithRotation class}
349
\begin{classdesc}{MapOnScalarClipWithRotation}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False}
This class works in a similar way to \Map, except that it
shows a 2D scalar field clipped using a scalar value and subsequently
rotated around the z-axis to create a 3D-looking effect. This class should
only be used with 2D data sets, NOT 3D ones.
\end{classdesc}
357
358 The following are some of the methods available:\\
359 Methods from \ActorThreeD, \Clipper, \Rotation and \DataSetMapper.
360
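A short sketch of its use is shown below. The 2D mesh file name, field name
and the clip, angle and resolution values are placeholders only.

\begin{python}
# Sketch: clip a 2D scalar field by a scalar value and sweep it around
# the z-axis. The 2D mesh file and field names are placeholders.
from esys.pyvisi import Scene, DataCollector, MapOnScalarClipWithRotation, Camera
from esys.pyvisi.constant import *

s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 800, y_size = 600)

dc = DataCollector(source = Source.XML)
dc.setFileName(file_name = "interior_2D.xml")   # placeholder 2D data set
dc.setActiveScalar(scalar = "temperature")

mosc = MapOnScalarClipWithRotation(scene = s, data_collector = dc,
        viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
        cell_to_point = False)
mosc.setClipValue(value = 0.5)        # from Clipper
mosc.setAngle(angle = 270)            # from Rotation: sweep 270 degrees
mosc.setResolution(resolution = 30)   # from Rotation

c = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
c.isometricView()

s.render(image_name = "rotation.jpg")
\end{python}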
361 \subsubsection{\Velocity class}
362
\begin{classdesc}{Velocity}{scene, data_collector, arrow = Arrow.TWO_D,
color_mode = ColorMode.VECTOR, viewport = Viewport.SOUTH_WEST,
lut = Lut.COLOR, cell_to_point = False, outline = True}
Class that shows a vector field using arrows. The arrows can either be
colored or grey-scaled, depending on the lookup table used. If the arrows
are colored, there are two possible coloring modes: using either vector data
or scalar data. Similarly, there are two possible arrow types: either
two-dimensional or three-dimensional.
\end{classdesc}
372
373 The following are some of the methods available:\\
374 Methods from \ActorThreeD, \GlyphThreeD, \MaskPoints and \DataSetMapper.
375
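A short sketch of \Velocity is shown below (a full example using
\VelocityOnPlaneCut follows in the next subsection). The mesh file and field
names, scale factor and mask ratio are placeholders only.

\begin{python}
# Sketch: show a vector field with 2D arrows colored by the vector data.
# The mesh file and field names are placeholders.
from esys.pyvisi import Scene, DataCollector, Velocity, Camera
from esys.pyvisi.constant import *

s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 800, y_size = 600)

dc = DataCollector(source = Source.XML)
dc.setFileName(file_name = "interior_3D.xml")
dc.setActiveVector(vector = "velocity")

v = Velocity(scene = s, data_collector = dc, arrow = Arrow.TWO_D,
        color_mode = ColorMode.VECTOR, viewport = Viewport.SOUTH_WEST,
        lut = Lut.COLOR, cell_to_point = False, outline = True)
v.setScaleModeByVector()              # from Glyph3D: scale arrows by vector data
v.setScaleFactor(scale_factor = 0.3)  # from Glyph3D
v.setRatio(ratio = 2)                 # from MaskPoints: mask every 2nd point
v.randomOn()                          # from MaskPoints

c = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
c.isometricView()

s.render(image_name = "velocity_arrows.jpg")
\end{python}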
376 \subsubsection{\VelocityOnPlaneCut class}
377
378 \begin{classdesc}{VelocityOnPlaneCut}{scene, data_collector,
379 arrow = Arrow.TWO_D, color_mode = ColorMode.VECTOR,
380 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
381 cell_to_point = False, outline = True}
382 This class works in a similar way to \MapOnPlaneCut, except that
383 it shows a vector field using arrows cut using a plane.
384 \end{classdesc}
385
386 The following are some of the methods available:\\
387 Methods from \ActorThreeD, \GlyphThreeD, \Transform, \MaskPoints and
388 \DataSetMapper.
389
390 A typical usage of \VelocityOnPlaneCut is shown below.
391
392 \begin{python}
393 """
394 Author: John Ngui, john.ngui@uq.edu.au
395 """
396
397 # Import the necessary modules
398 from esys.pyvisi import Scene, DataCollector, VelocityOnPlaneCut, Camera
399 from esys.pyvisi.constant import *
400 import os
401
402 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
403 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
404 X_SIZE = 400
405 Y_SIZE = 400
406
407 VECTOR_FIELD_CELL_DATA = "velocity"
408 FILE_3D = "interior_3D.xml"
409 IMAGE_NAME = "velocity.jpg"
410 JPG_RENDERER = Renderer.ONLINE_JPG
411
412 # Create a Scene.
413 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
414 y_size = Y_SIZE)
415
416 # Create a DataCollector reading from a XML file.
417 dc1 = DataCollector(source = Source.XML)
418 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
419 dc1.setActiveVector(vector = VECTOR_FIELD_CELL_DATA)
420
421 # Create VelocityOnPlaneCut.
422 vopc1 = VelocityOnPlaneCut(scene = s, data_collector = dc1,
423 viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR,
424 arrow = Arrow.THREE_D, lut = Lut.COLOR, cell_to_point = False,
425 outline = True)
426 vopc1.setScaleFactor(scale_factor = 0.5)
427 vopc1.setPlaneToXY(offset = 0.5)
428 vopc1.setRatio(2)
429 vopc1.randomOn()
430
431 # Create a Camera.
432 c1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
433 c1.isometricView()
434 c1.elevation(angle = -20)
435
436 # Render the object.
437 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, IMAGE_NAME))
438 \end{python}
439
440 \subsubsection{\VelocityOnPlaneClip class}
441
\begin{classdesc}{VelocityOnPlaneClip}{scene, data_collector,
arrow = Arrow.TWO_D, color_mode = ColorMode.VECTOR,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
cell_to_point = False, outline = True}
This class works in a similar way to \MapOnPlaneClip, except that it shows a
vector field using arrows clipped using a plane.
\end{classdesc}
449
450 The following are some of the methods available:\\
451 Methods from \ActorThreeD, \GlyphThreeD, \Transform, \Clipper,
452 \MaskPoints and \DataSetMapper.
453
454 \subsubsection{\Ellipsoid class}
455
\begin{classdesc}{Ellipsoid}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
Class that shows a tensor field using ellipsoids. The ellipsoids can either be
colored or grey-scaled, depending on the lookup table used.
\end{classdesc}
462
463 The following are some of the methods available:\\
464 Methods from \ActorThreeD, \Sphere, \TensorGlyph, \MaskPoints and
465 \DataSetMapper.
466
467 \subsubsection{\EllipsoidOnPlaneCut class}
468
469 \begin{classdesc}{EllipsoidOnPlaneCut}{scene, data_collector,
470 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
471 outline = True}
472 This class works in a similar way to \MapOnPlaneCut, except that it shows
473 a tensor field using ellipsoids cut using a plane.
474 \end{classdesc}
475
476 The following are some of the methods available:\\
477 Methods from \ActorThreeD, \Sphere, \TensorGlyph, \Transform,
478 \MaskPoints and \DataSetMapper.
479
480 \subsubsection{\EllipsoidOnPlaneClip class}
481
482 \begin{classdesc}{EllipsoidOnPlaneClip}{scene, data_collector,
483 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
484 outline = True}
485 This class works in a similar way to \MapOnPlaneClip, except that it shows a
486 tensor field using ellipsoids clipped using a plane.
487 \end{classdesc}
488
489 The following are some of the methods available:\\
490 Methods from \ActorThreeD, \Sphere, \TensorGlyph, \Transform, \Clipper,
491 \MaskPoints and \DataSetMapper.
492
493 A typical usage of \EllipsoidOnPlaneClip is shown below.
494
495 \begin{python}
496 """
497 Author: John Ngui, john.ngui@uq.edu.au
498 """
499
500 # Import the necessary modules
501 from esys.pyvisi import Scene, DataCollector, EllipsoidOnPlaneClip, Camera
502 from esys.pyvisi.constant import *
503 import os
504
505 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
506 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
507 X_SIZE = 400
508 Y_SIZE = 400
509
510 TENSOR_FIELD_CELL_DATA = "stress_cell"
511 FILE_3D = "interior_3D.xml"
512 IMAGE_NAME = "ellipsoid.jpg"
513 JPG_RENDERER = Renderer.ONLINE_JPG
514
515 # Create a Scene.
516 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
517 y_size = Y_SIZE)
518
519 # Create a DataCollector reading from a XML file.
520 dc1 = DataCollector(source = Source.XML)
521 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
522 dc1.setActiveTensor(tensor = TENSOR_FIELD_CELL_DATA)
523
524 # Create an EllipsoidOnPlaneClip.
525 eopc1 = EllipsoidOnPlaneClip(scene = s, data_collector = dc1,
526 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = True,
527 outline = True)
528 eopc1.setPlaneToXY()
529 eopc1.setScaleFactor(scale_factor = 0.2)
530 eopc1.rotateX(angle = 10)
531
532 # Create a Camera.
533 c1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
534 c1.bottomView()
535 c1.azimuth(angle = -90)
536 c1.elevation(angle = 10)
537
538 # Render the object.
539 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, IMAGE_NAME))
540 \end{python}
541
542 \subsubsection{\Contour class}
543
544 \begin{classdesc}{Contour}{scene, data_collector,
545 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
546 outline = True}
547 Class that shows a scalar field using contour surfaces. The contour surfaces can
548 either be colored or grey-scaled, depending on the lookup table used. This
549 class can also be used to generate iso surfaces.
550 \end{classdesc}
551
552 The following are some of the methods available:\\
553 Methods from \ActorThreeD, \ContourModule and \DataSetMapper.
554
555 A typical usage of \Contour is shown below.
556
557 \begin{python}
558 """
559 Author: John Ngui, john.ngui@uq.edu.au
560 """
561
562 # Import the necessary modules
563 from esys.pyvisi import Scene, DataCollector, Contour, Camera
564 from esys.pyvisi.constant import *
565 import os
566
567 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
568 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
569 X_SIZE = 400
570 Y_SIZE = 400
571
572 SCALAR_FIELD_POINT_DATA = "temperature"
573 FILE_3D = "interior_3D.xml"
574 IMAGE_NAME = "contour.jpg"
575 JPG_RENDERER = Renderer.ONLINE_JPG
576
577 # Create a Scene.
578 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
579 y_size = Y_SIZE)
580
581 # Create a DataCollector reading a XML file.
582 dc1 = DataCollector(source = Source.XML)
583 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
584 dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)
585
586 # Create a Contour.
587 ctr1 = Contour(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
588 lut = Lut.COLOR, cell_to_point = False, outline = True)
589 ctr1.generateContours(contours = 3)
590
591 # Create a Camera.
592 cam1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
593 cam1.elevation(angle = -40)
594
595 # Render the object.
596 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, IMAGE_NAME))
597 \end{python}
598
599 \subsubsection{\ContourOnPlaneCut class}
600
601 \begin{classdesc}{ContourOnPlaneCut}{scene, data_collector,
602 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
603 outline = True}
604 This class works in a similar way to \MapOnPlaneCut, except that it shows a
605 scalar field using contour surfaces cut using a plane.
606 \end{classdesc}
607
608 The following are some of the methods available:\\
609 Methods from \ActorThreeD, \ContourModule, \Transform and \DataSetMapper.
610
611 \subsubsection{\ContourOnPlaneClip class}
612
613 \begin{classdesc}{ContourOnPlaneClip}{scene, data_collector,
614 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
615 outline = True}
616 This class works in a similar way to \MapOnPlaneClip, except that it shows a
617 scalar field using contour surfaces clipped using a plane.
618 \end{classdesc}
619
620 The following are some of the methods available:\\
621 Methods from \ActorThreeD, \ContourModule, \Transform, \Clipper and
622 \DataSetMapper.
623
624 \subsubsection{\StreamLine class}
625
626 \begin{classdesc}{StreamLine}{scene, data_collector,
627 viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR, lut = Lut.COLOR,
628 cell_to_point = False, outline = True}
629 Class that shows the direction of particles of a vector field using streamlines.
630 The streamlines can either be colored or grey-scaled, depending on the lookup
631 table used. If the streamlines are colored, there are two possible coloring
632 modes, either using vector data or scalar data.
633 \end{classdesc}
634
635 The following are some of the methods available:\\
636 Methods from \ActorThreeD, \PointSource, \StreamLineModule, \Tube and
637 \DataSetMapper.
638
639 A typical usage of \StreamLine is shown below.
640
641 \begin{python}
642 """
643 Author: John Ngui, john.ngui@uq.edu.au
644 """
645
646 # Import the necessary modules.
647 from esys.pyvisi import Scene, DataCollector, StreamLine, Camera
648 from esys.pyvisi.constant import *
649 import os
650
651 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
652 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
653 X_SIZE = 400
654 Y_SIZE = 400
655
656 VECTOR_FIELD_CELL_DATA = "temperature"
657 FILE_3D = "interior_3D.xml"
658 IMAGE_NAME = "streamline.jpg"
659 JPG_RENDERER = Renderer.ONLINE_JPG
660
661 # Create a Scene.
662 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
663 y_size = Y_SIZE)
664
665 # Create a DataCollector reading from a XML file.
666 dc1 = DataCollector(source = Source.XML)
667 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
668
669 # Create a Streamline.
670 sl1 = StreamLine(scene = s, data_collector = dc1,
671 viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.SCALAR,
672 lut = Lut.COLOR, cell_to_point = False, outline = True)
673 sl1.setTubeRadius(radius = 0.02)
674 sl1.setTubeNumberOfSides(3)
675 sl1.setTubeRadiusToVaryByVector()
676 sl1.setPointSourceRadius(0.9)
677
678 # Create a Camera.
679 c1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
680 c1.isometricView()
681
682 # Render the object.
683 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, IMAGE_NAME))
684 \end{python}
685
686 \subsubsection{\Carpet class}
687
\begin{classdesc}{Carpet}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, warp_mode = WarpMode.SCALAR,
lut = Lut.COLOR, cell_to_point = False, outline = True}
This class works in a similar way to \MapOnPlaneCut, except that it shows a
scalar field cut on a plane and deformed (warped) along the plane normal. The
plane can either be colored or grey-scaled, depending on the lookup table used.
Similarly, the plane can be deformed using either scalar data or vector data.
\end{classdesc}
696
697 The following are some of the methods available:\\
698 Methods from \ActorThreeD, \Warp, \Transform and \DataSetMapper.
699
700 A typical usage of \Carpet is shown below.
701
702 \begin{python}
703 """
704 Author: John Ngui, john.ngui@uq.edu.au
705 """
706
707 # Import the necessary modules.
708 from esys.pyvisi import Scene, DataCollector, Carpet, Camera
709 from esys.pyvisi.constant import *
710 import os
711
712 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
713 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
714 X_SIZE = 400
715 Y_SIZE = 400
716
717 SCALAR_FIELD_CELL_DATA = "temperature_cell"
718 FILE_3D = "interior_3D.xml"
719 IMAGE_NAME = "carpet.jpg"
720 JPG_RENDERER = Renderer.ONLINE_JPG
721
722 # Create a Scene.
723 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
724 y_size = Y_SIZE)
725
726 # Create a DataCollector reading from a XML file.
727 dc1 = DataCollector(source = Source.XML)
728 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
729 dc1.setActiveScalar(scalar = SCALAR_FIELD_CELL_DATA)
730
731 # Create a Carpet.
732 cpt1 = Carpet(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
733 warp_mode = WarpMode.SCALAR, lut = Lut.COLOR, cell_to_point = True,
734 outline = True)
735 cpt1.setPlaneToXY(0.2)
736 cpt1.setScaleFactor(1.9)
737
738 # Create a Camera.
739 c1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
740 c1.isometricView()
741
742 # Render the object.
743 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, IMAGE_NAME))
744 \end{python}
745
746 \subsubsection{\Legend class}
747
\begin{classdesc}{Legend}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, legend = LegendType.SCALAR}
Class that shows a legend (scalar bar) for the data on the domain surface.
The legend can either be colored or grey-scaled, depending on the lookup
table used.
\end{classdesc}
753
754 The following are some of the methods available:\\
755 Methods from \ActorThreeD, \ScalarBar and \DataSetMapper.
756
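A short sketch of adding a legend for a scalar field is shown below. The mesh
file and field names, the title and the legend position are placeholders only.

\begin{python}
# Sketch: add a horizontal scalar bar (legend) for a scalar field.
# The mesh file, field name and positions are placeholders.
from esys.pyvisi import Scene, DataCollector, Map, Legend, LocalPosition
from esys.pyvisi.constant import *

s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 800, y_size = 600)

dc = DataCollector(source = Source.XML)
dc.setFileName(file_name = "interior_3D.xml")
dc.setActiveScalar(scalar = "temperature")

# Show the scalar field itself ...
m = Map(scene = s, data_collector = dc, viewport = Viewport.SOUTH_WEST,
        lut = Lut.COLOR, cell_to_point = False, outline = True)

# ... and a legend for it.
lg = Legend(scene = s, data_collector = dc,
        viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
        legend = LegendType.SCALAR)
lg.setTitle("Temperature")           # from ScalarBar
lg.setOrientationToHorizontal()      # from ScalarBar
lg.setPosition(LocalPosition(85, 5)) # from ScalarBar (placeholder position)

s.render(image_name = "legend.jpg")
\end{python}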
757 \subsubsection{\Rectangle class}
758
759 \begin{classdesc}{Rectangle}{scene, viewport = Viewport.SOUTH_WEST}
Class that generates a rectangular box.
761 \end{classdesc}
762
763 The following are some of the methods available:\\
764 Methods from \ActorThreeD, \CubeSource and \DataSetMapper.
765
766 \subsubsection{\Image class}
767
\begin{classdesc}{Image}{scene, image_reader, viewport = Viewport.SOUTH_WEST}
Class that displays an image which can be scaled (upwards and downwards) and
has interaction capability. The image can also be translated and rotated along
the X, Y and Z axes. One of the most common uses of this feature is to paste
an image onto a surface map.
\end{classdesc}
774
775 The following are some of the methods available:\\
776 Methods from \ActorThreeD, \PlaneSource and \Transform.
777
778 A typical usage of \Image is shown below.
779
780 \begin{python}
781 """
782 Author: John Ngui, john.ngui@uq.edu.au
783 """
784
785 # Import the necessary modules.
786 from esys.pyvisi import Scene, DataCollector, Map, ImageReader, Image, Camera
787 from esys.pyvisi import GlobalPosition
788 from esys.pyvisi.constant import *
789 import os
790
791 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
792 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
793 X_SIZE = 400
794 Y_SIZE = 400
795
796 SCALAR_FIELD_POINT_DATA = "temperature"
797 FILE_3D = "interior_3D.xml"
798 LOAD_IMAGE_NAME = "flinders.jpg"
799 SAVE_IMAGE_NAME = "image.jpg"
800 JPG_RENDERER = Renderer.ONLINE_JPG
801
802 # Create a Scene.
803 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
804 y_size = Y_SIZE)
805
806 # Create a DataCollector reading from a XML file.
807 dc1 = DataCollector(source = Source.XML)
808 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
809
810 # Create a Map.
811 m1 = Map(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
812 lut = Lut.COLOR, cell_to_point = False, outline = True)
813 m1.setOpacity(0.3)
814
815 # Create an ImageReader (in place of DataCollector).
816 ir = ImageReader(ImageFormat.JPG)
817 ir.setImageName(image_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, \
818 LOAD_IMAGE_NAME))
819
820 # Create an Image.
821 i = Image(scene = s, image_reader = ir, viewport = Viewport.SOUTH_WEST)
822 i.setOpacity(opacity = 0.9)
823 i.translate(0,0,-1)
824 i.setPoint1(GlobalPosition(2,0,0))
825 i.setPoint2(GlobalPosition(0,2,0))
826
827 # Create a Camera.
828 c1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
829
830 # Render the image.
831 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, SAVE_IMAGE_NAME))
832 \end{python}
833
834 \subsubsection{\Logo class}
835
\begin{classdesc}{Logo}{scene, image_reader, viewport = Viewport.SOUTH_WEST}
Class that displays a static image, in particular a logo
(e.g. a company symbol), and has NO interaction capability. The position and
size of the logo can be specified.
\end{classdesc}
841
842 The following are some of the methods available:\\
843 Methods from \ImageReslice and \ActorTwoD.
844
845 \subsubsection{\Movie class}
846
\begin{classdesc}{Movie}{parameter_file = "make_movie"}
Class that creates a file called 'make_movie' by default (if a parameter
file name is not specified) which contains the list of parameters required
by the 'ppmtompeg' command to generate a movie from a series of images.
\end{classdesc}
852
853 The following are some of the methods available:\\
\begin{methoddesc}[Movie]{imageRange}{input_directory, first_image, last_image}
The range of images from which the movie is to be generated.
\end{methoddesc}

\begin{methoddesc}[Movie]{imageList}{input_directory, image_list}
The list of images from which the movie is to be generated.
\end{methoddesc}
861
862 \begin{methoddesc}[Movie]{makeMovie}{movie}
863 Generate the movie.
864 \end{methoddesc}
865
866 A typical usage of \Movie is shown below.
867
868 \begin{python}
869 """
870 Author: John Ngui, john.ngui@uq.edu.au
871 """
872
873 # Import the necessary modules.
874 from esys.pyvisi import Scene, DataCollector, Map, Camera, Velocity, Legend
875 from esys.pyvisi import Movie, LocalPosition
876 from esys.pyvisi.constant import *
877 import os
878
879 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
880 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
881 X_SIZE = 800
882 Y_SIZE = 800
883
884 SCALAR_FIELD_POINT_DATA = "temp"
885 FILE_2D = "tempvel-"
886 IMAGE_NAME = "movie"
887 JPG_RENDERER = Renderer.ONLINE_JPG
888
889 # Create a Scene.
890 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
891 y_size = Y_SIZE)
892
893 # Create a DataCollector reading from a XML file.
894 dc1 = DataCollector(source = Source.XML)
895 dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)
896
897 # Create a Map.
898 m1 = Map(scene = s, data_collector = dc1,
899 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
900 outline = True)
901
902 # Create a Camera.
903 cam1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
904
905 # Create a movie.
906 mov = Movie()
907 #lst = []
908
909 # Read in one file one after another and render the object.
910 for i in range(938, 949):
911 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, \
912 FILE_2D + "%06d.vtu") % i)
913
914 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, \
915 IMAGE_NAME + "%06d.jpg") % i)
916
917 #lst.append(IMAGE_NAME + "%06d.jpg" % i)
918
919 # Images (first and last inclusive) from which the movie is to be generated.
920 mov.imageRange(input_directory = PYVISI_EXAMPLE_IMAGES_PATH,
921 first_image = IMAGE_NAME + "000938.jpg",
922 last_image = IMAGE_NAME + "000948.jpg")
923
924 # Alternatively, a list of images can be specified.
925 #mov.imageList(input_directory = PYVISI_EXAMPLE_IMAGES_PATH, image_list = lst)
926
927 # Generate the movie.
928 mov.makeMovie(os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, "movie.mpg"))
929 \end{python}
930
931
932 %##############################################################################
933
934
935 \subsection{Coordinate Classes}
936 This subsection details the instances used to position the rendered object.
937
938 \subsubsection{\LocalPosition class}
939
940 \begin{classdesc}{LocalPosition}{x_coor, y_coor}
941 Class that defines the local positioning (X and Y) coordinate system (2D).
942 \end{classdesc}
943
944 \subsubsection{\GlobalPosition class}
945
946 \begin{classdesc}{GlobalPosition}{x_coor, y_coor, z_coor}
947 Class that defines the global positioning (X, Y and Z) coordinate system (3D).
948 \end{classdesc}
949
950
951 %##############################################################################
952
953
954 \subsection{Supporting Classes}
955 This subsection details the supporting classes and their corresponding methods
956 inherited by the input (see Section \ref{INPUT SEC}) and data
957 visualization classes (see Section \ref{DATAVIS SEC}).
958
959 \subsubsection{\ActorThreeD class}
960 Class that defines a 3D actor. \\
961
962 The following are some of the methods available:
963
964 \begin{methoddesc}[Actor3D]{setOpacity}{opacity}
965 Set the opacity (transparency) of the 3D actor.
966 \end{methoddesc}
967
968 \begin{methoddesc}[Actor3D]{setColor}{color}
969 Set the color of the 3D actor.
970 \end{methoddesc}
971
972 \begin{methoddesc}[Actor3D]{setRepresentationToWireframe}{}
973 Set the representation of the 3D actor to wireframe.
974 \end{methoddesc}
975
976 \subsubsection{\ActorTwoD class}
977 Class that defines a 2D actor. \\
978
979 The following are some of the methods available:
980
981 \begin{methoddesc}[Actor2D]{setPosition}{position}
982 Set the position (XY) of the 2D actor. Default position is the lower left hand
983 corner of the window / viewport.
984 \end{methoddesc}
985
986 \subsubsection{\Clipper class}
987 Class that defines a clipper. \\
988
989 The following are some of the methods available:
990
991 \begin{methoddesc}[Clipper]{setInsideOutOn}{}
992 Clips one side of the rendered object.
993 \end{methoddesc}
994
995 \begin{methoddesc}[Clipper]{setInsideOutOff}{}
996 Clips the other side of the rendered object.
997 \end{methoddesc}
998
999 \begin{methoddesc}[Clipper]{setClipValue}{value}
1000 Set the scalar clip value (instead of using a plane) for the clipper.
1001 \end{methoddesc}
1002
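The sketch below shows the \Clipper methods used through \MapOnScalarClip,
which inherits them. The mesh file, field name and clip value are
placeholders only.

\begin{python}
# Sketch: use Clipper methods through MapOnScalarClip, which inherits them.
# The mesh file, field name and clip value are placeholders.
from esys.pyvisi import Scene, DataCollector, MapOnScalarClip
from esys.pyvisi.constant import *

s = Scene(renderer = Renderer.ONLINE_JPG, num_viewport = 1,
        x_size = 800, y_size = 600)

dc = DataCollector(source = Source.XML)
dc.setFileName(file_name = "interior_3D.xml")
dc.setActiveScalar(scalar = "temperature")

msc = MapOnScalarClip(scene = s, data_collector = dc,
        viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
        cell_to_point = False, outline = True)
msc.setClipValue(value = 1.2)   # clip at this scalar value
msc.setInsideOutOn()            # keep the other side of the clip

s.render(image_name = "scalar_clip.jpg")
\end{python}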
1003 \subsubsection{\ContourModule class}
1004 Class that defines the contour module. \\
1005
1006 The following are some of the methods available:
1007
1008 \begin{methoddesc}[ContourModule]{generateContours}{contours = None,
1009 lower_range = None, upper_range = None}
1010 Generate the specified number of contours within the specified range.
1011 In order to generate an iso surface, the 'lower_range' and 'upper_range'
1012 must be equal.
1013 \end{methoddesc}
1014
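As a brief illustration of the iso-surface case, the snippet below assumes an
existing \Contour instance \texttt{ctr} (see the \Contour example earlier);
the value 1.5 is a placeholder.

\begin{python}
# Sketch: generate a single iso surface at the value 1.5 by setting
# 'lower_range' and 'upper_range' to the same value. 'ctr' is assumed
# to be an existing Contour instance (see the Contour example above).
ctr.generateContours(contours = 1, lower_range = 1.5, upper_range = 1.5)
\end{python}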
1015 \subsubsection{\GlyphThreeD class}
1016 Class that defines 3D glyphs. \\
1017
1018 The following are some of the methods available:
1019
1020 \begin{methoddesc}[Glyph3D]{setScaleModeByVector}{}
1021 Set the 3D glyph to scale according to the vector data.
1022 \end{methoddesc}
1023
1024 \begin{methoddesc}[Glyph3D]{setScaleModeByScalar}{}
1025 Set the 3D glyph to scale according to the scalar data.
1026 \end{methoddesc}
1027
1028 \begin{methoddesc}[Glyph3D]{setScaleFactor}{scale_factor}
1029 Set the 3D glyph scale factor.
1030 \end{methoddesc}
1031
1032 \subsubsection{\TensorGlyph class}
1033 Class that defines tensor glyphs. \\
1034
1035 The following are some of the methods available:
1036
1037 \begin{methoddesc}[TensorGlyph]{setScaleFactor}{scale_factor}
1038 Set the scale factor for the tensor glyph.
1039 \end{methoddesc}
1040
1041 \begin{methoddesc}[TensorGlyph]{setMaxScaleFactor}{max_scale_factor}
1042 Set the maximum allowable scale factor for the tensor glyph.
1043 \end{methoddesc}
1044
1045 \subsubsection{\PlaneSource class}
1046 Class that defines a plane source. A plane source is defined by an origin
1047 and two other points, which form the axes (X and Y). \\
1048
1049 The following are some of the methods available:
1050
1051 \begin{methoddesc}[PlaneSource]{setPoint1}{position}
1052 Set the first point from the origin of the plane source.
1053 \end{methoddesc}
1054
1055 \begin{methoddesc}[PlaneSource]{setPoint2}{position}
1056 Set the second point from the origin of the plane source.
1057 \end{methoddesc}
1058
1059 \subsubsection{\PointSource class}
Class that defines the source (location) at which points are generated. The
points are generated within a sphere of the given radius. \\
1062
1063 The following are some of the methods available:
1064
1065 \begin{methoddesc}[PointSource]{setPointSourceRadius}{radius}
1066 Set the radius of the sphere.
1067 \end{methoddesc}
1068
1069 \begin{methoddesc}[PointSource]{setPointSourceCenter}{center}
1070 Set the center of the sphere.
1071 \end{methoddesc}
1072
1073 \begin{methoddesc}[PointSource]{setPointSourceNumberOfPoints}{points}
1074 Set the number of points to generate within the sphere (the larger the
1075 number of points, the more streamlines are generated).
1076 \end{methoddesc}
1077
1078 \subsubsection{\Sphere class}
1079 Class that defines a sphere. \\
1080
1081 The following are some of the methods available:
1082
1083 \begin{methoddesc}[Sphere]{setThetaResolution}{resolution}
1084 Set the theta resolution of the sphere.
1085 \end{methoddesc}
1086
1087 \begin{methoddesc}[Sphere]{setPhiResolution}{resolution}
1088 Set the phi resolution of the sphere.
1089 \end{methoddesc}
1090
1091 \subsubsection{\StreamLineModule class}
1092 Class that defines the streamline module. \\
1093
1094 The following are some of the methods available:
1095
1096 \begin{methoddesc}[StreamLineModule]{setMaximumPropagationTime}{time}
1097 Set the maximum length of the streamline expressed in elapsed time.
1098 \end{methoddesc}
1099
\begin{methoddesc}[StreamLineModule]{setIntegrationToBothDirections}{}
Set the integration to occur in both directions: forward (where the streamline
goes) and backward (where the streamline came from).
\end{methoddesc}
1104
1105 \subsubsection{\Transform class}
1106 Class that defines the orientation of planes. \\
1107
1108 The following are some of the methods available:
1109
1110 \begin{methoddesc}[Transform]{translate}{x_offset, y_offset, z_offset}
1111 Translate the rendered object along the x, y and z-axes.
1112 \end{methoddesc}
1113
\begin{methoddesc}[Transform]{rotateX}{angle}
Rotate the plane around the x-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{rotateY}{angle}
Rotate the plane around the y-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{rotateZ}{angle}
Rotate the plane around the z-axis.
\end{methoddesc}
1125
1126 \begin{methoddesc}[Transform]{setPlaneToXY}{offset = 0}
1127 Set the plane orthogonal to the z-axis.
1128 \end{methoddesc}
1129
1130 \begin{methoddesc}[Transform]{setPlaneToYZ}{offset = 0}
1131 Set the plane orthogonal to the x-axis.
1132 \end{methoddesc}
1133
1134 \begin{methoddesc}[Transform]{setPlaneToXZ}{offset = 0}
1135 Set the plane orthogonal to the y-axis.
1136 \end{methoddesc}
1137
1138 \subsubsection{\Tube class}
1139 Class that defines the tube wrapped around the streamlines. \\
1140
1141 The following are some of the methods available:
1142
1143 \begin{methoddesc}[Tube]{setTubeRadius}{radius}
1144 Set the radius of the tube.
1145 \end{methoddesc}
1146
1147 \begin{methoddesc}[Tube]{setTubeRadiusToVaryByVector}{}
1148 Set the radius of the tube to vary by vector data.
1149 \end{methoddesc}
1150
1151 \begin{methoddesc}[Tube]{setTubeRadiusToVaryByScalar}{}
1152 Set the radius of the tube to vary by scalar data.
1153 \end{methoddesc}
1154
1155 \subsubsection{\Warp class}
1156 Class that defines the deformation of a scalar field. \\
1157
1158 The following are some of the methods available:
1159
1160 \begin{methoddesc}[Warp]{setScaleFactor}{scale_factor}
1161 Set the displacement scale factor.
1162 \end{methoddesc}
1163
1164 \subsubsection{\MaskPoints class}
Class that defines the masking of points, displaying only every n'th point.
This is useful to prevent the rendered object from being cluttered with
arrows or ellipsoids. \\
1168
1169 The following are some of the methods available:
1170
1171 \begin{methoddesc}[MaskPoints]{setRatio}{ratio}
1172 Mask every n'th point.
1173 \end{methoddesc}
1174
1175 \begin{methoddesc}[MaskPoints]{randomOn}{}
1176 Enables the randomization of the points selected for masking.
1177 \end{methoddesc}
1178
1179 \subsubsection{\ScalarBar class}
1180 Class that defines a scalar bar. \\
1181
1182 The following are some of the methods available:
1183
1184 \begin{methoddesc}[ScalarBar]{setTitle}{title}
1185 Set the title of the scalar bar.
1186 \end{methoddesc}
1187
1188 \begin{methoddesc}[ScalarBar]{setPosition}{position}
1189 Set the local position of the scalar bar.
1190 \end{methoddesc}
1191
1192 \begin{methoddesc}[ScalarBar]{setOrientationToHorizontal}{}
1193 Set the orientation of the scalar bar to horizontal.
1194 \end{methoddesc}
1195
1196 \begin{methoddesc}[ScalarBar]{setOrientationToVertical}{}
1197 Set the orientation of the scalar bar to vertical.
1198 \end{methoddesc}
1199
1200 \begin{methoddesc}[ScalarBar]{setHeight}{height}
1201 Set the height of the scalar bar.
1202 \end{methoddesc}
1203
1204 \begin{methoddesc}[ScalarBar]{setWidth}{width}
1205 Set the width of the scalar bar.
1206 \end{methoddesc}
1207
1208 \begin{methoddesc}[ScalarBar]{setLabelColor}{color}
1209 Set the color of the scalar bar's label.
1210 \end{methoddesc}
1211
1212 \begin{methoddesc}[ScalarBar]{setTitleColor}{color}
1213 Set the color of the scalar bar's title.
1214 \end{methoddesc}
1215
1216 \subsubsection{\ImageReslice class}
Class that defines an image reslice, used to resize static
(no interaction capability) images (e.g. a logo). \\
1219
1220 The following are some of the methods available:
1221
1222 \begin{methoddesc}[ImageReslice]{setSize}{size}
1223 Set the size of the image (logo in particular), between 0 and 2. Size 1 (one)
1224 displays the image in its original size (which is the default).
1225 \end{methoddesc}
1226
1227 \subsubsection{\DataSetMapper class}
1228 Class that defines a data set mapper. \\
1229
1230 The following are some of the methods available:
1231
\begin{methoddesc}[DataSetMapper]{setScalarRange}{lower_range, upper_range}
Set the minimum and maximum of the scalar range for the data set mapper. This
method is called when the range is to be specified by the user, in which case
the scalar range read from the source is ignored.
\end{methoddesc}
1237
1238 \subsubsection{\CubeSource class}
Class that defines a cube source. The center of the cube source defines
the point from which the cube is generated, and the X, Y and Z lengths
define the lengths of the cube along the respective axes, measured across the
center point. For example, if the X length is 3, the cube extends 1.5 to
either side of the center point along the x-axis.\\
1244
1245 The following are some of the methods available:
1246
1247 \begin{methoddesc}[CubeSource]{setCenter}{center}
1248 Set the cube source center.
1249 \end{methoddesc}
1250
1251 \begin{methoddesc}[CubeSource]{setXLength}{length}
1252 Set the cube source length along the x-axis.
1253 \end{methoddesc}
1254
1255 \begin{methoddesc}[CubeSource]{setYLength}{length}
1256 Set the cube source length along the y-axis.
1257 \end{methoddesc}
1258
1259 \begin{methoddesc}[CubeSource]{setZLength}{length}
1260 Set the cube source length along the z-axis.
1261 \end{methoddesc}
1262
1263 \subsubsection{\Rotation class}
1264 Class that sweeps 2D data around the z-axis to create a 3D looking effect. \\
1265
1266 The following are some of the methods available:
1267
\begin{methoddesc}[Rotation]{setResolution}{resolution}
Set the resolution of the sweep for the rotation, which controls the
number of intermediate points.
\end{methoddesc}
1272
1273 \begin{methoddesc}[Rotation]{setAngle}{angle}
1274 Set the angle of rotation.
1275 \end{methoddesc}
1276
1277
1278 % #############################################################################
1279
1280
1281 \section{More Examples}
1282 This section shows more examples.
1283
1284 \textsf{Reading A Series of Files}
1285
1286 \begin{python}
1287 """
1288 Author: John Ngui, john.ngui@uq.edu.au
1289 """
1290
1291 # Import the necessary modules.
1292 from esys.pyvisi import Scene, DataCollector, Contour, Camera
1293 from esys.pyvisi.constant import *
1294 import os
1295
1296 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
1297 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
1298 X_SIZE = 400
1299 Y_SIZE = 300
1300
1301 SCALAR_FIELD_POINT_DATA_1 = "lava"
1302 SCALAR_FIELD_POINT_DATA_2 = "talus"
1303 FILE_2D = "phi_talus_lava."
1304
1305 IMAGE_NAME = "seriesofreads"
1306 JPG_RENDERER = Renderer.ONLINE_JPG
1307
1308 # Create a Scene.
1309 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
1310 y_size = Y_SIZE)
1311
1312 # Create a DataCollector reading from a XML file.
1313 dc1 = DataCollector(source = Source.XML)
1314 dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA_1)
1315
1316 # Create a Contour.
1317 mosc1 = Contour(scene = s, data_collector = dc1,
1318 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
1319 outline = True)
1320 mosc1.generateContours(0)
1321
1322 # Create a second DataCollector reading from the same XML file
1323 # but specifying a different scalar field.
1324 dc2 = DataCollector(source = Source.XML)
1325 dc2.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA_2)
1326
1327 # Create a second Contour.
1328 mosc2 = Contour(scene = s, data_collector = dc2,
1329 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
1330 outline = True)
1331 mosc2.generateContours(0)
1332
1333 # Create a Camera.
1334 cam1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
1335
1336 # Read in one file one after another and render the object.
1337 for i in range(99, 104):
1338 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, \
1339 FILE_2D + "%04d.vtu") % i)
1340 dc2.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, \
1341 FILE_2D + "%04d.vtu") % i)
1342
1343 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, \
1344 IMAGE_NAME + "%04d.jpg") % i)
1345 \end{python}
1346
\textsf{Manipulating A Single File with A Series of Translations}
1348
1349 \begin{python}
1350 """
1351 Author: John Ngui, john.ngui@uq.edu.au
1352 """
1353
1354 # Import the necessary modules.
1355 from esys.pyvisi import Scene, DataCollector, MapOnPlaneCut, Camera
1356 from esys.pyvisi.constant import *
1357 import os
1358
1359 PYVISI_EXAMPLE_MESHES_PATH = "data_meshes"
1360 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
1361 X_SIZE = 400
1362 Y_SIZE = 400
1363
1364 SCALAR_FIELD_POINT_DATA = "temperature"
1365 FILE_3D = "interior_3D.xml"
1366 IMAGE_NAME = "seriesofcuts"
1367 JPG_RENDERER = Renderer.ONLINE_JPG
1368
1369 # Create a Scene.
1370 s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
1371 y_size = Y_SIZE)
1372
1373 # Create a DataCollector reading from a XML file.
1374 dc1 = DataCollector(source = Source.XML)
1375 dc1.setFileName(file_name = os.path.join(PYVISI_EXAMPLE_MESHES_PATH, FILE_3D))
1376 dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)
1377
1378 # Create a MapOnPlaneCut.
1379 mopc1 = MapOnPlaneCut(scene = s, data_collector = dc1,
1380 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
1381 outline = True)
1382 mopc1.setPlaneToYZ(offset = 0.1)
1383
1384 # Create a Camera.
1385 c1 = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
1386 c1.isometricView()
1387
1388 # Render the object with multiple cuts using a series of translation.
1389 for i in range(0, 5):
1390 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, IMAGE_NAME +
1391 "%02d.jpg") % i)
1392 mopc1.translate(0.6,0,0)
1393 \end{python}
1394
1395 \textsf{Reading Data Directly from Escript Objects}
1396
1397 \begin{python}
1398 """
1399 Author: Lutz Gross, l.gross@uq.edu.au
1400 Author: John Ngui, john.ngui@uq.edu.au
1401 """
1402
1403 # Import the necessary modules.
1404 from esys.escript import *
1405 from esys.escript.linearPDEs import LinearPDE
1406 from esys.finley import Rectangle
1407 from esys.pyvisi import Scene, DataCollector, Map, Camera
1408 from esys.pyvisi.constant import *
1409 import os
1410
1411 PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images"
1412 X_SIZE = 400
1413 Y_SIZE = 400
1414 JPG_RENDERER = Renderer.ONLINE_JPG
1415
1416 #... set some parameters ...
1417 xc=[0.02,0.002]
1418 r=0.001
1419 qc=50.e6
1420 Tref=0.
1421 rhocp=2.6e6
1422 eta=75.
1423 kappa=240.
1424 tend=5.
1425 # ... time, time step size and counter ...
1426 t=0
1427 h=0.1
1428 i=0
1429
1430 #... generate domain ...
1431 mydomain = Rectangle(l0=0.05,l1=0.01,n0=250, n1=50)
1432 #... open PDE ...
1433 mypde=LinearPDE(mydomain)
1434 mypde.setSymmetryOn()
1435 mypde.setValue(A=kappa*kronecker(mydomain),D=rhocp/h,d=eta,y=eta*Tref)
1436 # ... set heat source: ....
1437 x=mydomain.getX()
1438 qH=qc*whereNegative(length(x-xc)-r)
1439 # ... set initial temperature ....
1440 T=Tref
1441
1442 # Create a Scene.
1443 s = Scene(renderer = JPG_RENDERER, x_size = X_SIZE, y_size = Y_SIZE)
1444
1445 # Create a DataCollector reading directly from escript objects.
1446 dc = DataCollector(source = Source.ESCRIPT)
1447
1448 # Create a Map.
1449 m = Map(scene = s, data_collector = dc, \
1450 viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, \
1451 cell_to_point = False, outline = True)
1452
1453 # Create a Camera.
1454 c = Camera(scene = s, viewport = Viewport.SOUTH_WEST)
1455
1456 # ... start iteration:
1457 while t<0.4:
1458 i+=1
1459 t+=h
1460 mypde.setValue(Y=qH+rhocp/h*T)
1461 T=mypde.getSolution()
1462
1463 dc.setData(temp = T)
1464
1465 # Render the object.
1466 s.render(image_name = os.path.join(PYVISI_EXAMPLE_IMAGES_PATH, \
1467 "diffusion%02d.jpg") % i)
1468 \end{python}
1469
1470 \newpage
1471
1472 \section{Useful Keys}
1473 This section shows some of the useful keys when interacting with the rendered
1474 object (in the Online approach).
1475
1476 \begin{table}[ht]
1477 \begin{center}
1478 \begin{tabular}{| c | p{13cm} |}
1479 \hline
1480 \textbf{Key} & \textbf{Description} \\ \hline
1481 Keypress 'c' / 'a' & Toggle between the camera ('c') and object ('a') mode. In
1482 camera mode, mouse events affect the camera position and focal point. In
1483 object mode, mouse events affect the rendered object's element (i.e.
1484 cut surface map, clipped velocity field, streamline, etc) that is under the
1485 mouse pointer.\\ \hline
1486 Mouse button 1 & Rotate the camera around its focal point (if in camera mode)
1487 or rotate the rendered object's element (if in object mode).\\ \hline
Mouse button 2 & Pan the camera (if in camera mode) or translate the rendered
object's element (if in object mode). \\ \hline
1490 Mouse button 3 & Zoom the camera (if in camera mode) or scale the rendered
1491 object's element (if in object mode). \\ \hline
Keypress '3' & Toggle the render window in and out of stereo mode. By default,
1493 red-blue stereo pairs are created. \\ \hline
1494 Keypress 'e' / 'q' & Exit the application if only one file is to be read, or
1495 read and display the next file if multiple files are to be read. \\ \hline
1496 Keypress 's' & Modify the representation of the rendered object to surfaces.
1497 \\ \hline
1498 Keypress 'w' & Modify the representation of the rendered object to wireframe.
1499 \\ \hline
1500 Keypress 'r' & Reset the position of the rendered object to the center.
1501 \\ \hline
1502 \end{tabular}
1503 \caption{Useful keys}
1504 \end{center}
1505 \end{table}
1506
1507
1508 % ############################################################################
1509
1510
1511 \newpage
1512
1513 \section{Sample Output}
This section displays some sample output generated by Pyvisi.
1515
1516 \begin{table}[ht]
1517 \begin{tabular}{c c c}
1518 \includegraphics[width=\thumbnailwidth]{figures/Map} &
1519 \includegraphics[width=\thumbnailwidth]{figures/MapOnPlaneCut} &
1520 \includegraphics[width=\thumbnailwidth]{figures/MapOnPlaneClip} \\
1521 Map & MapOnPlaneCut & MapOnPlaneClip \\
1522 \includegraphics[width=\thumbnailwidth]{figures/MapOnScalarClip} &
1523 \includegraphics[width=\thumbnailwidth]{figures/Velocity} &
1524 \includegraphics[width=\thumbnailwidth]{figures/VelocityOnPlaneCut} \\
1525 MapOnScalarClip & Velocity & VelocityOnPlaneCut \\
1526 \includegraphics[width=\thumbnailwidth]{figures/VelocityOnPlaneClip} &
1527 \includegraphics[width=\thumbnailwidth]{figures/Ellipsoid} &
1528 \includegraphics[width=\thumbnailwidth]{figures/EllipsoidOnPlaneCut} \\
1529 VelocityOnPlaneClip & Ellipsoid & EllipsoidOnPlaneCut \\
1530 \includegraphics[width=\thumbnailwidth]{figures/EllipsoidOnPlaneClip} &
1531 \includegraphics[width=\thumbnailwidth]{figures/Contour} &
1532 \includegraphics[width=\thumbnailwidth]{figures/ContourOnPlaneCut} \\
1533 EllipsoidOnPlaneClip & Contour & ContourOnPlaneCut \\
1534 \end{tabular}
1535 \caption{Sample output}
1536 \end{table}
1537
1538 \begin{table}[t]
1539 \begin{tabular}{c c c}
1540 \includegraphics[width=\thumbnailwidth]{figures/ContourOnPlaneClip} &
1541 \includegraphics[width=\thumbnailwidth]{figures/StreamLine} &
1542 \includegraphics[width=\thumbnailwidth]{figures/Carpet} \\
1543 ContourOnPlaneClip & StreamLine & Carpet \\
1544 \includegraphics[width=\thumbnailwidth]{figures/Rectangle} &
1545 \includegraphics[width=\thumbnailwidth]{figures/Text} &
1546 \includegraphics[width=\thumbnailwidth]{figures/Logo} \\
1547 Rectangle & Text & Logo \\
1548 \includegraphics[width=\thumbnailwidth]{figures/Image} &
1549 \includegraphics[width=\thumbnailwidth]{figures/Legend} \\
1550 Image & Legend \\
1551 \end{tabular}
1552 \caption{Sample Output}
1553 \end{table}
1554
1555
