\chapter{The module \pyvisi}
\label{PYVISI CHAP}
\declaremodule{extension}{esys.pyvisi}
\modulesynopsis{Python Visualization Interface}

\section{Introduction}
\pyvisi is a Python module used to generate 2D and 3D visualizations
for escript and its PDE solvers: finley and bruce. The module provides
an easy-to-use interface to the \VTK library (\VTKUrl). There are three
approaches to rendering an object: (1) Online - the object is rendered
on-screen with interaction capability (i.e. zoom and rotate), (2) Offline -
the object is rendered off-screen (no window appears) and (3) Display - the
object is rendered on-screen but with no interaction capability (able to
produce on-the-fly animation). All three approaches have the option to save
the rendered object as an image.

The following points outline the general guidelines when using \pyvisi:

\begin{enumerate}
\item Create a \Scene instance, a window in which objects are rendered.
\item Create a data input instance (i.e. \DataCollector or \ImageReader), which
reads and loads the source data for visualization.
\item Create a data visualization instance (i.e. \Map, \Velocity, \Ellipsoid,
\Contour, \Carpet, \StreamLine or \Image), which processes and manipulates the
source data.
\item Create a \Camera or \Light instance, which controls the viewing angle and
lighting effects.
\item Render the object using either the Online, Offline or Display approach.
\end{enumerate}
\begin{center}
\begin{math}
scene \rightarrow data \; input \rightarrow data \; visualization \rightarrow
camera \, / \, light \rightarrow render
\end{math}
\end{center}

The sequence in which instances are created is very important due to
the dependencies among them. For example, a data input instance must
be created BEFORE a data visualization instance, because the source data must
be specified before it can be manipulated. If the sequence is switched,
the program will throw an error. Similarly, a camera and a light instance must
be created AFTER a data input instance, because they calculate their
positions based on the source data. If the sequence is switched, the program
will throw an error.

\section{\pyvisi Classes}
The following subsections give a brief overview of the important classes
and some of their corresponding methods. Please refer to \ReferenceGuide for
full details.


%#############################################################################


\subsection{Scene Classes}
This subsection details the instances used to set up the viewing environment.

\subsubsection{\Scene class}

\begin{classdesc}{Scene}{renderer = Renderer.ONLINE, num_viewport = 1,
x_size = 1152, y_size = 864}
A scene is a window in which objects are rendered. Only
one scene needs to be created. However, a scene may be divided into four
smaller windows called viewports (if needed). Each viewport can
render a different object.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[Scene]{setBackground}{color}
Set the background color of the scene.
\end{methoddesc}

\begin{methoddesc}[Scene]{render}{image_name = None}
Render the object using either the Online, Offline or Display mode.
\end{methoddesc}

\subsubsection{\Camera class}

\begin{classdesc}{Camera}{scene, data_collector, viewport = Viewport.SOUTH_WEST}
A camera controls the display angle of the rendered object and one is
usually created for a \Scene. However, if a \Scene has four viewports, then a
separate camera may be created for each viewport.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[Camera]{setFocalPoint}{position}
Set the focal point of the camera.
\end{methoddesc}

\begin{methoddesc}[Camera]{setPosition}{position}
Set the position of the camera.
\end{methoddesc}

\begin{methoddesc}[Camera]{azimuth}{angle}
Rotate the camera to the left or right.
\end{methoddesc}

\begin{methoddesc}[Camera]{elevation}{angle}
Rotate the camera up or down (only between -90 and 90 degrees).
\end{methoddesc}

\begin{methoddesc}[Camera]{backView}{}
Rotate the camera to view the back of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{topView}{}
Rotate the camera to view the top of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{bottomView}{}
Rotate the camera to view the bottom of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{leftView}{}
Rotate the camera to view the left side of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{rightView}{}
Rotate the camera to view the right side of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{isometricView}{}
Rotate the camera to an isometric view of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{dolly}{distance}
Move the camera towards (distance greater than 1) or away from (distance less
than 1) the rendered object.
\end{methoddesc}
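
The azimuth and elevation angles can be understood as spherical coordinates
around the focal point. As an illustration only (not \pyvisi's internal code,
which delegates to \VTK), the following pure-Python sketch computes a camera
position from azimuth and elevation angles on a sphere around a focal point:

```python
import math

def camera_position(focal_point, radius, azimuth_deg, elevation_deg):
    """Illustrative only: place a camera on a sphere of the given radius
    around focal_point, using azimuth (left/right) and elevation (up/down)
    angles in degrees. Elevation is limited to (-90, 90), mirroring the
    restriction on Camera.elevation()."""
    if not -90 < elevation_deg < 90:
        raise ValueError("elevation must be between -90 and 90 degrees")
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    fx, fy, fz = focal_point
    # x/y span the horizontal plane; z is the vertical axis.
    x = fx + radius * math.cos(el) * math.sin(az)
    y = fy + radius * math.cos(el) * math.cos(az)
    z = fz + radius * math.sin(el)
    return (x, y, z)

# Azimuth 0 and elevation 0 place the camera radius units along +y.
print(camera_position((0.0, 0.0, 0.0), 10.0, 0, 0))
```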

\subsubsection{\Light class}

\begin{classdesc}{Light}{scene, data_collector, viewport = Viewport.SOUTH_WEST}
A light controls the lighting of the rendered object and works in
a similar way to \Camera.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[Light]{setColor}{color}
Set the light color.
\end{methoddesc}

\begin{methoddesc}[Light]{setFocalPoint}{position}
Set the focal point of the light.
\end{methoddesc}

\begin{methoddesc}[Light]{setPosition}{position}
Set the position of the light.
\end{methoddesc}

\begin{methoddesc}[Light]{setAngle}{elevation = 0, azimuth = 0}
An alternative way to set the position and focal point of the light, using
elevation and azimuth angles.
\end{methoddesc}


%##############################################################################


\subsection{Input Classes}
This subsection details the instances used to read and load the source data
for visualization.

\subsubsection{\DataCollector class}

\begin{classdesc}{DataCollector}{source = Source.XML}
A data collector is used to read data either from an XML file (using
\texttt{setFileName()}) or from an escript object directly (using
\texttt{setData()}).
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[DataCollector]{setFileName}{file_name}
Set the XML file name to read.
\end{methoddesc}

\begin{methoddesc}[DataCollector]{setData}{**args}
Load data using \textless name\textgreater=\textless data\textgreater
keyword pairs. The data is assumed to be given in the appropriate format.
\end{methoddesc}
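
The \textless name\textgreater=\textless data\textgreater pairing is ordinary
Python keyword-argument syntax. As a generic illustration (the class below is
hypothetical, not \pyvisi code), a method declared with \texttt{**args}
receives the names and values as a dictionary:

```python
# Generic illustration of <name>=<data> keyword pairing, as used by
# DataCollector.setData(); the Collector class here is hypothetical.
class Collector:
    def __init__(self):
        self.fields = {}

    def setData(self, **args):
        # Each keyword becomes a named field; each value is its data.
        for name, data in args.items():
            self.fields[name] = data

c = Collector()
c.setData(temperature = [1.0, 2.0], velocity = [(0.1, 0.2), (0.3, 0.4)])
print(sorted(c.fields))  # → ['temperature', 'velocity']
```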

\begin{methoddesc}[DataCollector]{setActiveScalar}{scalar}
Specify the scalar field to load.
\end{methoddesc}

\begin{methoddesc}[DataCollector]{setActiveVector}{vector}
Specify the vector field to load.
\end{methoddesc}

\begin{methoddesc}[DataCollector]{setActiveTensor}{tensor}
Specify the tensor field to load.
\end{methoddesc}

\subsubsection{\ImageReader class}

\begin{classdesc}{ImageReader}{format}
An image reader is used to read data from an image in a variety of formats.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[ImageReader]{setImageName}{image_name}
Set the name of the image to be read.
\end{methoddesc}

\subsubsection{\TextTwoD class}

\begin{classdesc}{Text2D}{scene, text, viewport = Viewport.SOUTH_WEST}
Two-dimensional text is used to annotate the rendered object
(i.e. adding titles, authors and labels).
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[Text2D]{setFontSize}{size}
Set the 2D text size.
\end{methoddesc}

\begin{methoddesc}[Text2D]{boldOn}{}
Make the 2D text bold.
\end{methoddesc}

\begin{methoddesc}[Text2D]{setColor}{color}
Set the color of the 2D text.
\end{methoddesc}

Methods from \ActorTwoD are also available.


%##############################################################################


\subsection{Data Visualization Classes}
This subsection details the instances used to process and manipulate the source
data. The typical usage of the classes is also shown.

One point to note is that the source data can be either point or cell data. If
the source is cell data, a conversion to point data may or may not be
required for the object to be rendered correctly.
If a conversion is needed, the 'cell_to_point' flag (see below) must be set to
'True'; otherwise it should be 'False' (the default).
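
Conceptually, cell-to-point conversion averages the values of the cells that
share each point. A minimal pure-Python sketch of the idea (not \pyvisi's
implementation, which delegates to \VTK) for a 1D mesh:

```python
def cell_to_point(cell_values):
    """Illustrative cell-to-point conversion for a 1D mesh of N cells
    and N+1 points: each point takes the average of its adjacent cells."""
    n = len(cell_values)
    point_values = []
    for p in range(n + 1):
        # Cells adjacent to point p (one at each boundary, two inside).
        adjacent = cell_values[max(p - 1, 0):p + 1]
        point_values.append(sum(adjacent) / len(adjacent))
    return point_values

# Four cells produce five point values; interior points are averages.
print(cell_to_point([1.0, 3.0, 5.0, 7.0]))  # → [1.0, 2.0, 4.0, 6.0, 7.0]
```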

\subsubsection{\Map class}

\begin{classdesc}{Map}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
Class that shows a scalar field on a domain surface. The domain surface
can either be colored or grey-scaled, depending on the lookup table used.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD.

A typical usage of \Map is shown below.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, Map, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 800
Y_SIZE = 800

SCALAR_FIELD_POINT_DATA = "temperature"
SCALAR_FIELD_CELL_DATA = "temperature_cell"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "map.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene with four viewports.
s = Scene(renderer = JPG_RENDERER, num_viewport = 4, x_size = X_SIZE,
y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)

# Create a Map for the south-west viewport.
m1 = Map(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
lut = Lut.COLOR, cell_to_point = False, outline = True)
m1.setRepresentationToWireframe()

# Create a Camera for the south-west viewport.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.isometricView()

# Create a second DataCollector reading from the same XML file but specifying
# a different scalar field.
dc2 = DataCollector(source = Source.XML)
dc2.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc2.setActiveScalar(scalar = SCALAR_FIELD_CELL_DATA)

# Create a second Map for the north-east viewport, converting cell data to
# point data.
m2 = Map(scene = s, data_collector = dc2, viewport = Viewport.NORTH_EAST,
lut = Lut.COLOR, cell_to_point = True, outline = True)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\MapOnPlaneCut class}

\begin{classdesc}{MapOnPlaneCut}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \Map, except that it shows a scalar
field cut using a plane. The plane can be translated and rotated along the
X, Y and Z axes.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD and \Transform.

\subsubsection{\MapOnPlaneClip class}

\begin{classdesc}{MapOnPlaneClip}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \MapOnPlaneCut, except that it shows a
scalar field clipped using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Transform and \Clipper.

\subsubsection{\MapOnScalarClip class}

\begin{classdesc}{MapOnScalarClip}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \Map, except that it shows a scalar
field clipped using a scalar value.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD and \Clipper.

\subsubsection{\Velocity class}

\begin{classdesc}{Velocity}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR,
arrow = Arrow.TWO_D, lut = Lut.COLOR, cell_to_point = False, outline = True}
Class that shows a vector field using arrows. The arrows can either be
colored or grey-scaled, depending on the lookup table used. If the arrows
are colored, there are two possible coloring modes, using either vector data
or scalar data. Similarly, the arrows can be either two-dimensional or
three-dimensional.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \GlyphThreeD and \MaskPoints.

\subsubsection{\VelocityOnPlaneCut class}

\begin{classdesc}{VelocityOnPlaneCut}{scene, data_collector,
arrow = Arrow.TWO_D, color_mode = ColorMode.VECTOR,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
cell_to_point = False, outline = True}
This class works in a similar way to \MapOnPlaneCut, except that
it shows a vector field using arrows cut using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \GlyphThreeD, \Transform and \MaskPoints.

A typical usage of \VelocityOnPlaneCut is shown below.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, VelocityOnPlaneCut, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

VECTOR_FIELD_CELL_DATA = "velocity"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "velocity.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveVector(vector = VECTOR_FIELD_CELL_DATA)

# Create a VelocityOnPlaneCut.
vopc1 = VelocityOnPlaneCut(scene = s, data_collector = dc1,
viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR,
arrow = Arrow.THREE_D, lut = Lut.COLOR, cell_to_point = False,
outline = True)
vopc1.setScaleFactor(scale_factor = 0.5)
vopc1.setPlaneToXY(offset = 0.5)
vopc1.setRatio(2)
vopc1.randomOn()

# Create a Camera.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.isometricView()
c1.elevation(angle = -20)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\VelocityOnPlaneClip class}

\begin{classdesc}{VelocityOnPlaneClip}{scene, data_collector,
arrow = Arrow.TWO_D, color_mode = ColorMode.VECTOR,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
cell_to_point = False, outline = True}
This class works in a similar way to \MapOnPlaneClip, except that it shows a
vector field using arrows clipped using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \GlyphThreeD, \Transform, \Clipper and
\MaskPoints.

\subsubsection{\Ellipsoid class}

\begin{classdesc}{Ellipsoid}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
Class that shows a tensor field using ellipsoids. The ellipsoids can either be
colored or grey-scaled, depending on the lookup table used.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Sphere, \TensorGlyph and \MaskPoints.

\subsubsection{\EllipsoidOnPlaneCut class}

\begin{classdesc}{EllipsoidOnPlaneCut}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \MapOnPlaneCut, except that it shows
a tensor field using ellipsoids cut using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Sphere, \TensorGlyph, \Transform and
\MaskPoints.

\subsubsection{\EllipsoidOnPlaneClip class}

\begin{classdesc}{EllipsoidOnPlaneClip}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \MapOnPlaneClip, except that it shows a
tensor field using ellipsoids clipped using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Sphere, \TensorGlyph, \Transform, \Clipper
and \MaskPoints.

A typical usage of \EllipsoidOnPlaneClip is shown below.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, EllipsoidOnPlaneClip, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

TENSOR_FIELD_CELL_DATA = "stress_cell"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "ellipsoid.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG

# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveTensor(tensor = TENSOR_FIELD_CELL_DATA)

# Create an EllipsoidOnPlaneClip.
eopc1 = EllipsoidOnPlaneClip(scene = s, data_collector = dc1,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = True,
outline = True)
eopc1.setPlaneToXY()
eopc1.setScaleFactor(scale_factor = 0.2)
eopc1.rotateX(angle = 10)

# Create a Camera.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.bottomView()
c1.azimuth(angle = -90)
c1.elevation(angle = 10)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\Contour class}

\begin{classdesc}{Contour}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
Class that shows a scalar field using contour surfaces. The contour surfaces can
either be colored or grey-scaled, depending on the lookup table used. This
class can also be used to generate iso surfaces.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD and \ContourModule.

A typical usage of \Contour is shown below.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, Contour, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

SCALAR_FIELD_POINT_DATA = "temperature"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "contour.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)

# Create a Contour.
ctr1 = Contour(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
lut = Lut.COLOR, cell_to_point = False, outline = True)
ctr1.generateContours(contours = 3)

# Create a Camera.
cam1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
cam1.elevation(angle = -40)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\ContourOnPlaneCut class}

\begin{classdesc}{ContourOnPlaneCut}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \MapOnPlaneCut, except that it shows a
scalar field using contour surfaces cut using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \ContourModule and \Transform.

\subsubsection{\ContourOnPlaneClip class}

\begin{classdesc}{ContourOnPlaneClip}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \MapOnPlaneClip, except that it shows a
scalar field using contour surfaces clipped using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \ContourModule, \Transform and \Clipper.

\subsubsection{\StreamLine class}

\begin{classdesc}{StreamLine}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR, lut = Lut.COLOR,
cell_to_point = False, outline = True}
Class that shows the direction of particles of a vector field using streamlines.
The streamlines can either be colored or grey-scaled, depending on the lookup
table used. If the streamlines are colored, there are two possible coloring
modes, using either vector data or scalar data.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \PointSource, \StreamLineModule and \Tube.

A typical usage of \StreamLine is shown below.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, StreamLine, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

FILE_3D = "interior_3D.xml"
IMAGE_NAME = "streamline.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)

# Create a StreamLine.
sl1 = StreamLine(scene = s, data_collector = dc1,
viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.SCALAR,
lut = Lut.COLOR, cell_to_point = False, outline = True)
sl1.setTubeRadius(radius = 0.02)

# Create a Camera.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.isometricView()

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\Carpet class}

\begin{classdesc}{Carpet}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, warp_mode = WarpMode.SCALAR,
lut = Lut.COLOR, cell_to_point = False, outline = True}
This class works in a similar way to \MapOnPlaneCut, except that it shows a
scalar field cut on a plane and deformed (warped) along the normal. The
plane can either be colored or grey-scaled, depending on the lookup table used.
Similarly, the plane can be deformed using either scalar data or vector data.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Warp and \Transform.

A typical usage of \Carpet is shown below.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, Carpet, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

SCALAR_FIELD_CELL_DATA = "temperature_cell"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "carpet.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG

# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveScalar(scalar = SCALAR_FIELD_CELL_DATA)

# Create a Carpet.
cpt1 = Carpet(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
warp_mode = WarpMode.SCALAR, lut = Lut.COLOR, cell_to_point = True,
outline = True)
cpt1.setPlaneToXY(0.2)
cpt1.setScaleFactor(1.9)

# Create a Camera.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.isometricView()

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\Image class}

\begin{classdesc}{Image}{scene, image_reader, viewport = Viewport.SOUTH_WEST}
Class that displays an image which can be scaled (upwards and downwards) and
has interaction capability. The image can also be translated and rotated along
the X, Y and Z axes. One of the most common uses of this feature is pasting an
image onto a surface map.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \PlaneSource and \Transform.

A typical usage of \Image is sketched below (the image format constant
\texttt{ImageFormat.JPG} and the input file name are assumed).

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, ImageReader, Image
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

LOAD_IMAGE_NAME = "logo.jpg"
IMAGE_NAME = "image.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
y_size = Y_SIZE)

# Create an ImageReader (the ImageFormat.JPG constant is assumed here).
ir = ImageReader(ImageFormat.JPG)
ir.setImageName(image_name = PYVISI_EXAMPLE_IMAGES_PATH + LOAD_IMAGE_NAME)

# Create an Image.
i = Image(scene = s, image_reader = ir, viewport = Viewport.SOUTH_WEST)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\Logo class}

\begin{classdesc}{Logo}{scene, image_reader, viewport = Viewport.SOUTH_WEST}
Class that displays a static image, in particular a logo
(i.e. a company symbol), and has NO interaction capability.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ImageReslice and \ActorTwoD.


%##############################################################################


\subsection{Coordinate Classes}
This subsection details the instances used to position the rendered object.

\begin{classdesc}{LocalPosition}{x_coor, y_coor}
Class that defines the local positioning coordinate system (2D).
\end{classdesc}

\begin{classdesc}{GlobalPosition}{x_coor, y_coor, z_coor}
Class that defines the global positioning coordinate system (3D).
\end{classdesc}


%##############################################################################

779 |
\subsection{Supporting Classes} |
780 |
This subsection details the supporting classes inherited by the data |
781 |
visualization classes and their available methods. |
782 |
|
783 |
\subsubsection{\ActorThreeD class} |
784 |
|
785 |
The following are some of the methods available: |
786 |
|
787 |
\begin{methoddesc}[Actor3D]{setOpacity}{opacity} |
788 |
Set the opacity (transparency) of the 3D actor. |
789 |
\end{methoddesc} |
790 |
|
791 |
\begin{methoddesc}[Actor3D]{setColor}{color} |
792 |
Set the color of the 3D actor. |
793 |
\end{methoddesc} |
794 |
|
795 |
\begin{methoddesc}[Actor3D]{setRepresentationToWireframe}{} |
796 |
Set the representation of the 3D actor to wireframe. |
797 |
\end{methoddesc} |
798 |
|
799 |
\subsubsection{\ActorTwoD class} |
800 |
|
801 |
The following are some of the methods available: |
802 |
|
803 |
\begin{methoddesc}[Actor2D]{setPosition}{position} |
804 |
Set the position (XY) of the 2D actor. Default position is the lower left hand |
805 |
corner of the window / viewport. |
806 |
\end{methoddesc} |
807 |
|
808 |
\subsubsection{\Clipper class} |
809 |
|
810 |
The following are some of the methods available: |
811 |
|
812 |
\begin{methoddesc}[Clipper]{setInsideOutOn}{} |
813 |
Clips one side of the rendered object. |
814 |
\end{methoddesc} |
815 |
|
816 |
\begin{methoddesc}[Clipper]{setInsideOutOff}{} |
817 |
Clips the other side of the rendered object. |
818 |
\end{methoddesc} |
819 |
|
820 |
\begin{methoddesc}[Clipper]{setClipValue}{value} |
821 |
Set the scalar clip value (instead of using a plane) for the clipper. |
822 |
\end{methoddesc} |
823 |
|
824 |
\subsubsection{\ContourModule class} |
825 |
|
826 |
The following are some of the methods available: |
827 |
|
828 |
\begin{methoddesc}[ContourModule]{generateContours}{contours, |
829 |
lower_range = None, upper_range = None} |
830 |
Generate the specified number of contours within the specified range. |
831 |
In order to generate an iso surface, the 'lower_range' and 'upper_range' |
832 |
must be equal. |
833 |
\end{methoddesc} |
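
For instance, a single iso surface at the scalar value 0.5 might be generated
as follows (a minimal sketch, assuming \var{ctr1} is a \Contour instance
created as in the earlier examples, and 0.5 is an illustrative value):

\begin{python}
# Equal lower and upper ranges produce an iso surface rather than
# a series of contours.
ctr1.generateContours(contours = 1, lower_range = 0.5, upper_range = 0.5)
\end{python}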

\subsubsection{\GlyphThreeD class}

The following are some of the methods available:

\begin{methoddesc}[Glyph3D]{setScaleModeByVector}{}
Set the 3D glyph to scale according to the vector data.
\end{methoddesc}

\begin{methoddesc}[Glyph3D]{setScaleModeByScalar}{}
Set the 3D glyph to scale according to the scalar data.
\end{methoddesc}

\begin{methoddesc}[Glyph3D]{setScaleFactor}{scale_factor}
Set the 3D glyph scale factor.
\end{methoddesc}

\subsubsection{\TensorGlyph class}

The following are some of the methods available:

\begin{methoddesc}[TensorGlyph]{setScaleFactor}{scale_factor}
Set the scale factor for the tensor glyph.
\end{methoddesc}

\begin{methoddesc}[TensorGlyph]{setMaxScaleFactor}{max_scale_factor}
Set the maximum allowable scale factor for the tensor glyph.
\end{methoddesc}

\subsubsection{\PlaneSource class}

The following are some of the methods available:

\begin{methoddesc}[PlaneSource]{setPoint1}{position}
Set the first point from the origin of the plane source.
\end{methoddesc}

\begin{methoddesc}[PlaneSource]{setPoint2}{position}
Set the second point from the origin of the plane source.
\end{methoddesc}

\subsubsection{\PointSource class}

The following are some of the methods available:

\begin{methoddesc}[PointSource]{setPointSourceRadius}{radius}
Set the radius of the sphere.
\end{methoddesc}

\begin{methoddesc}[PointSource]{setPointSourceCenter}{position}
Set the center of the sphere.
\end{methoddesc}

\begin{methoddesc}[PointSource]{setPointSourceNumberOfPoints}{points}
Set the number of points to generate within the sphere (the larger the
number of points, the more streamlines are generated).
\end{methoddesc}

\subsubsection{\Sphere class}

The following are some of the methods available:

\begin{methoddesc}[Sphere]{setThetaResolution}{resolution}
Set the theta resolution of the sphere.
\end{methoddesc}

\begin{methoddesc}[Sphere]{setPhiResolution}{resolution}
Set the phi resolution of the sphere.
\end{methoddesc}

\subsubsection{\StreamLineModule class}

The following are some of the methods available:

\begin{methoddesc}[StreamLineModule]{setMaximumPropagationTime}{time}
Set the maximum length of the streamline expressed in elapsed time.
\end{methoddesc}

\begin{methoddesc}[StreamLineModule]{setIntegrationToBothDirections}{}
Set the integration to occur in both directions: forward (where the
streamline goes) and backward (where the streamline came from).
\end{methoddesc}

\subsubsection{\Transform class}

The following are some of the methods available:

\begin{methoddesc}[Transform]{translate}{x_offset, y_offset, z_offset}
Translate the rendered object along the x, y and z-axes.
\end{methoddesc}

\begin{methoddesc}[Transform]{rotateX}{angle}
Rotate the plane about the x-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{rotateY}{angle}
Rotate the plane about the y-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{rotateZ}{angle}
Rotate the plane about the z-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{setPlaneToXY}{offset = 0}
Set the plane orthogonal to the z-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{setPlaneToYZ}{offset = 0}
Set the plane orthogonal to the x-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{setPlaneToXZ}{offset = 0}
Set the plane orthogonal to the y-axis.
\end{methoddesc}

\subsubsection{\Tube class}

The following are some of the methods available:

\begin{methoddesc}[Tube]{setTubeRadius}{radius}
Set the radius of the tube.
\end{methoddesc}

\begin{methoddesc}[Tube]{setTubeRadiusToVaryByVector}{}
Set the radius of the tube to vary according to the vector data.
\end{methoddesc}

\begin{methoddesc}[Tube]{setTubeRadiusToVaryByScalar}{}
Set the radius of the tube to vary according to the scalar data.
\end{methoddesc}

\subsubsection{\Warp class}

The following are some of the methods available:

\begin{methoddesc}[Warp]{setScaleFactor}{scale_factor}
Set the displacement scale factor.
\end{methoddesc}

\subsubsection{\MaskPoints class}

The following are some of the methods available:

\begin{methoddesc}[MaskPoints]{setRatio}{ratio}
Mask every n-th point.
\end{methoddesc}

\begin{methoddesc}[MaskPoints]{randomOn}{}
Enable the randomization of the points selected for masking.
\end{methoddesc}

\subsubsection{\ImageReslice class}

The following are some of the methods available:

\begin{methoddesc}[ImageReslice]{setSize}{size}
Set the size of the image (between 0 and 2). A size of 1 displays the
image at its original size, which is the default.
\end{methoddesc}


% #############################################################################


\section{More Examples}
This section presents further complete examples.

\textsf{Reading A Series of Files}

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, Contour, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 300

SCALAR_FIELD_POINT_DATA_1 = "lava"
SCALAR_FIELD_POINT_DATA_2 = "talus"
FILE_2D = "phi_talus_lava."
FIRST_FILE_NAME = "phi_talus_lava.0099.vtu"

IMAGE_NAME = "seriesofreads"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
        y_size = Y_SIZE)

# Create a DataCollector reading from an XML file. An initial file must always
# be assigned when the DataCollector is created, although the same file is
# read again in the for-loop.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FIRST_FILE_NAME)
dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA_1)

# Create a Contour.
mosc1 = Contour(scene = s, data_collector = dc1,
        viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
        outline = True)
mosc1.generateContours(0)

# Create a second DataCollector reading from the same XML file. An initial
# file must always be assigned when the DataCollector is created,
# although the same file is read again in the for-loop.
dc2 = DataCollector(source = Source.XML)
dc2.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FIRST_FILE_NAME)
dc2.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA_2)

# Create a second Contour.
mosc2 = Contour(scene = s, data_collector = dc2,
        viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
        outline = True)
mosc2.generateContours(0)

# Read the files one after another and render the object each time.
for i in range(99, 104):
    dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_2D +
            "%04d.vtu" % i)
    dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA_1)
    dc2.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_2D +
            "%04d.vtu" % i)
    dc2.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA_2)

    s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME + "%04d.jpg" % i)
\end{python}
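
The loop above builds both the input and output file names with Python's
"%04d" format specifier, which zero-pads the frame index to four digits. A
standalone sketch of the naming scheme (plain Python, independent of \pyvisi):

\begin{python}
FILE_2D = "phi_talus_lava."
IMAGE_NAME = "seriesofreads"

# Frames 99 to 103 map to zero-padded file names.
input_names = [FILE_2D + "%04d.vtu" % i for i in range(99, 104)]
output_names = [IMAGE_NAME + "%04d.jpg" % i for i in range(99, 104)]
# input_names[0] is "phi_talus_lava.0099.vtu", matching FIRST_FILE_NAME above.
\end{python}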

\textsf{Manipulating A Single File with A Series of Translations}

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, MapOnPlaneCut, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

SCALAR_FIELD_POINT_DATA = "temperature"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "seriesofcuts"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
        y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)

# Create a MapOnPlaneCut.
mopc1 = MapOnPlaneCut(scene = s, data_collector = dc1,
        viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
        outline = True)
mopc1.setPlaneToYZ(offset = 0.1)

# Create a Camera.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.isometricView()

# Render the object with multiple cuts from a series of translations.
for i in range(0, 5):
    s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME + "%02d.jpg" % i)
    mopc1.translate(0.6, 0, 0)
\end{python}
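
Since the cut plane starts at a YZ offset of 0.1 and is translated by 0.6
along the x-axis after each frame is rendered, the five images show cuts at
the following x positions (plain Python, independent of \pyvisi):

\begin{python}
# Frame i is rendered before the i-th translation,
# so frame i shows the cut at 0.1 + 0.6 * i along the x-axis.
offsets = [0.1 + 0.6 * i for i in range(0, 5)]
# offsets is approximately [0.1, 0.7, 1.3, 1.9, 2.5]
\end{python}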

\section{Useful Keys}
This section lists some useful keys for interacting with the rendered
object.

\begin{table}
\begin{center}
\begin{tabular}{| c | p{13cm} |}
\hline
\textbf{Key} & \textbf{Description} \\ \hline
Keypress 'c' / 'a' & Toggle between the camera ('c') and object ('a') mode. In
camera mode, mouse events affect the camera position and focal point. In
object mode, mouse events affect the rendered object's element (i.e.
cut surface map, clipped velocity field, streamline, etc.) that is under the
mouse pointer.\\ \hline
Mouse button 1 & Rotate the camera around its focal point (in camera mode)
or rotate the rendered object's element (in object mode).\\ \hline
Mouse button 2 & Pan the camera (in camera mode) or translate the rendered
object's element (in object mode). \\ \hline
Mouse button 3 & Zoom the camera (in camera mode) or scale the rendered
object's element (in object mode). \\ \hline
Keypress '3' & Toggle the render window in and out of stereo mode. By default,
red-blue stereo pairs are created. \\ \hline
Keypress 'e' / 'q' & Exit the application if only one file is to be read, or
read and display the next file if multiple files are to be read. \\ \hline
Keypress 's' & Change the representation of the rendered object to surfaces.
\\ \hline
Keypress 'w' & Change the representation of the rendered object to wireframe.
\\ \hline
\end{tabular}
\end{center}
\end{table}


% ############################################################################


\section{Sample Output}
The following section displays a list of sample outputs.

\begin{table}[h]
\begin{tabular}{c c c}
\includegraphics[width=\thumbnailwidth]{figures/Map} &
\includegraphics[width=\thumbnailwidth]{figures/MapOnPlaneCut} &
\includegraphics[width=\thumbnailwidth]{figures/MapOnPlaneClip} \\
Map & MapOnPlaneCut & MapOnPlaneClip \\
\includegraphics[width=\thumbnailwidth]{figures/MapOnScalarClip} &
\includegraphics[width=\thumbnailwidth]{figures/Velocity} &
\includegraphics[width=\thumbnailwidth]{figures/VelocityOnPlaneCut} \\
MapOnScalarClip & Velocity & VelocityOnPlaneCut \\
\includegraphics[width=\thumbnailwidth]{figures/VelocityOnPlaneClip} &
\includegraphics[width=\thumbnailwidth]{figures/Ellipsoid} &
\includegraphics[width=\thumbnailwidth]{figures/EllipsoidOnPlaneCut} \\
VelocityOnPlaneClip & Ellipsoid & EllipsoidOnPlaneCut \\
\includegraphics[width=\thumbnailwidth]{figures/EllipsoidOnPlaneClip} &
\includegraphics[width=\thumbnailwidth]{figures/Contour} &
\includegraphics[width=\thumbnailwidth]{figures/ContourOnPlaneCut} \\
EllipsoidOnPlaneClip & Contour & ContourOnPlaneCut \\
\includegraphics[width=\thumbnailwidth]{figures/ContourOnPlaneClip} &
\includegraphics[width=\thumbnailwidth]{figures/StreamLine} &
\includegraphics[width=\thumbnailwidth]{figures/Carpet} \\
ContourOnPlaneClip & StreamLine & Carpet \\
\end{tabular}
\caption{Sample output}
\end{table}
