\chapter{The module \pyvisi}
\label{PYVISI CHAP}
\declaremodule{extension}{esys.pyvisi}
\modulesynopsis{Python Visualization Interface}

\section{Introduction}
\pyvisi is a Python module used to generate 2D and 3D visualizations
for escript and its PDE solvers: finley and bruce. The module provides
an easy-to-use interface to the \VTK library (\VTKUrl). There are three
approaches to rendering an object: (1) Online - the object is rendered
on-screen with interaction (e.g. zoom and rotate) capability, (2) Offline -
the object is rendered off-screen (no window appears) and (3) Display - the
object is rendered on-screen but without interaction capability (which makes
it possible to produce on-the-fly animation). All three approaches offer the
option of saving the rendered object as an image.

The following points outline the general guidelines when using \pyvisi:

\begin{enumerate}
\item Create a \Scene instance, a window in which objects are rendered.
\item Create a data input instance (e.g. \DataCollector or \ImageReader), which
reads and loads the source data for visualization.
\item Create a data visualization instance (e.g. \Map, \Velocity, \Ellipsoid,
\Contour, \Carpet, \StreamLine or \Image), which processes and manipulates the
source data.
\item Create a \Camera or \Light instance, which controls the viewing angle and
lighting effects.
\item Render the object using either the Online, Offline or Display approach.
\end{enumerate}
\begin{center}
\begin{math}
scene \rightarrow data \; input \rightarrow data \; visualization \rightarrow
camera \, / \, light \rightarrow render
\end{math}
\end{center}

The sequence in which instances are created is very important due
to the dependencies among them. For example, a data input instance must
be created BEFORE a data visualization instance, because the source data must
be specified before it can be manipulated. Similarly, a camera and light
instance must be created AFTER a data input instance, because the camera and
light calculate their positions based on the source data. If either sequence
is switched, the program will throw an error.
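The five steps above can be sketched as a minimal script. This is only a
sketch: the mesh file name and scalar field name are placeholders taken from
the examples later in this chapter.

\begin{python}
# A minimal end-to-end pipeline: scene -> data input -> visualization
# -> camera -> render.
from esys.pyvisi import Scene, DataCollector, Map, Camera
from esys.pyvisi.constant import *

# 1. Create a Scene (the rendering window).
s = Scene(renderer = Renderer.ONLINE, num_viewport = 1, x_size = 800,
        y_size = 800)

# 2. Create a data input instance and load the source data.
dc = DataCollector(source = Source.XML)
dc.setFileName(file_name = "interior_3D.xml")
dc.setActiveScalar(scalar = "temperature")

# 3. Create a data visualization instance.
m = Map(scene = s, data_collector = dc, viewport = Viewport.SOUTH_WEST,
        lut = Lut.COLOR, cell_to_point = False, outline = True)

# 4. Create a Camera.
c = Camera(scene = s, data_collector = dc, viewport = Viewport.SOUTH_WEST)
c.isometricView()

# 5. Render the object on-screen.
s.render()
\end{python}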

\section{\pyvisi Classes}
The following subsections give a brief overview of the important classes
and some of their corresponding methods. Please refer to \ReferenceGuide for
full details.


%#############################################################################


\subsection{Scene Classes}
This subsection details the instances used to set up the viewing environment.

\subsubsection{\Scene class}

\begin{classdesc}{Scene}{renderer = Renderer.ONLINE, num_viewport = 1,
x_size = 1152, y_size = 864}
A scene is a window in which objects are rendered. Only
one scene needs to be created. However, a scene may be divided into four
smaller windows called viewports (if needed). Each viewport can
render a different object.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[Scene]{setBackground}{color}
Set the background color of the scene.
\end{methoddesc}

\begin{methoddesc}[Scene]{render}{image_name = None}
Render the object using either the Online, Offline or Display mode.
\end{methoddesc}

\subsubsection{\Camera class}

\begin{classdesc}{Camera}{scene, data_collector, viewport = Viewport.SOUTH_WEST}
A camera controls the display angle of the rendered object. One camera is
usually created for a \Scene. However, if a \Scene has four viewports, a
separate camera may be created for each viewport.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[Camera]{setFocalPoint}{position}
Set the focal point of the camera.
\end{methoddesc}

\begin{methoddesc}[Camera]{setPosition}{position}
Set the position of the camera.
\end{methoddesc}

\begin{methoddesc}[Camera]{azimuth}{angle}
Rotate the camera to the left or right.
\end{methoddesc}

\begin{methoddesc}[Camera]{elevation}{angle}
Rotate the camera up or down (the angle must be between -90 and 90).
\end{methoddesc}

\begin{methoddesc}[Camera]{backView}{}
Rotate the camera to view the back of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{topView}{}
Rotate the camera to view the top of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{bottomView}{}
Rotate the camera to view the bottom of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{leftView}{}
Rotate the camera to view the left side of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{rightView}{}
Rotate the camera to view the right side of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{isometricView}{}
Rotate the camera to an isometric view of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Camera]{dolly}{distance}
Move the camera towards the rendered object (distance greater than 1) or
away from it (distance less than 1).
\end{methoddesc}
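The methods above can be combined to fine-tune the viewing angle step by
step. The following sketch assumes a \Scene and a \DataCollector named
\texttt{s} and \texttt{dc} have already been created:

\begin{python}
# Start from an isometric view, then fine-tune the angle and zoom.
c = Camera(scene = s, data_collector = dc, viewport = Viewport.SOUTH_WEST)
c.isometricView()
c.azimuth(angle = 30)     # rotate 30 degrees to the right
c.elevation(angle = -15)  # tilt the camera down slightly
c.dolly(distance = 1.2)   # move the camera closer to the object
\end{python}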

\subsubsection{\Light class}

\begin{classdesc}{Light}{scene, data_collector, viewport = Viewport.SOUTH_WEST}
A light controls the lighting of the rendered object and works in
a similar way to \Camera.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[Light]{setColor}{color}
Set the light color.
\end{methoddesc}

\begin{methoddesc}[Light]{setFocalPoint}{position}
Set the focal point of the light.
\end{methoddesc}

\begin{methoddesc}[Light]{setPosition}{position}
Set the position of the light.
\end{methoddesc}

\begin{methoddesc}[Light]{setAngle}{elevation = 0, azimuth = 0}
An alternative way of setting the position and focal point of the light,
using an elevation and azimuth.
\end{methoddesc}
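As with \Camera, a light is created from the scene and the data collector.
The sketch below (assuming an existing \Scene \texttt{s} and \DataCollector
\texttt{dc}) places the light using \texttt{setAngle} rather than an explicit
position:

\begin{python}
# Place a light above and to the side of the rendered object.
l = Light(scene = s, data_collector = dc, viewport = Viewport.SOUTH_WEST)
l.setAngle(elevation = 45, azimuth = 30)
\end{python}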


%##############################################################################


\subsection{Input Classes}
This subsection details the instances used to read and load the source data
for visualization.

\subsubsection{\DataCollector class}

\begin{classdesc}{DataCollector}{source = Source.XML}
A data collector is used to read data from an XML file or
directly from an escript object.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[DataCollector]{setFileName}{file_name}
Set the XML file name to read.
\end{methoddesc}

\begin{methoddesc}[DataCollector]{setData}{**args}
Create data using the \textless name\textgreater=\textless data\textgreater
pairing. It is assumed that the data is given in the appropriate format.

BUG: Reading source data directly from an escript object does NOT
work properly. Therefore this method should NOT be used at this stage.
\end{methoddesc}

\begin{methoddesc}[DataCollector]{setActiveScalar}{scalar}
Specify the scalar field to load.
\end{methoddesc}

\begin{methoddesc}[DataCollector]{setActiveVector}{vector}
Specify the vector field to load.
\end{methoddesc}

\begin{methoddesc}[DataCollector]{setActiveTensor}{tensor}
Specify the tensor field to load.
\end{methoddesc}

\subsubsection{\ImageReader class}

\begin{classdesc}{ImageReader}{format}
An image reader is used to read data from an image in a variety of formats.
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[ImageReader]{setImageName}{image_name}
Set the name of the image to be read.
\end{methoddesc}

\subsubsection{\TextTwoD class}

\begin{classdesc}{Text2D}{scene, text, viewport = Viewport.SOUTH_WEST}
A two-dimensional text is used to annotate the rendered object
(e.g. adding titles, authors and labels).
\end{classdesc}

The following are some of the methods available:
\begin{methoddesc}[Text2D]{setFontSize}{size}
Set the 2D text size.
\end{methoddesc}

\begin{methoddesc}[Text2D]{boldOn}{}
Make the 2D text bold.
\end{methoddesc}

\begin{methoddesc}[Text2D]{setColor}{color}
Set the color of the 2D text.
\end{methoddesc}

Methods from \ActorTwoD are also included.
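A title can be added to a scene with a few lines. This sketch assumes an
existing \Scene named \texttt{s}; the title string is arbitrary:

\begin{python}
# Annotate the scene with a bold title in the default color.
t = Text2D(scene = s, text = "Temperature distribution",
        viewport = Viewport.SOUTH_WEST)
t.setFontSize(size = 18)
t.boldOn()
\end{python}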


%##############################################################################


\subsection{Data Visualization Classes}
This subsection details the instances used to process and manipulate the source
data.

One point to note is that the source can either be point or cell data. If the
source is cell data, a conversion to point data may or may not be
required, in order for the object to be rendered correctly.
If a conversion is needed, the \texttt{cell_to_point} flag (see below) must be
set to \texttt{True}, otherwise \texttt{False} (which is the default).

\subsubsection{\Map class}

\begin{classdesc}{Map}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
Class that shows a scalar field on a domain surface. The domain surface
can either be colored or grey-scaled, depending on the lookup table used.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD.

A typical usage of \Map is shown below; the script can be found in
\texttt{\PyvisiExampleDirectory}.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, Map, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 800
Y_SIZE = 800

SCALAR_FIELD_POINT_DATA = "temperature"
SCALAR_FIELD_CELL_DATA = "temperature_cell"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "map.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene with four viewports.
s = Scene(renderer = JPG_RENDERER, num_viewport = 4, x_size = X_SIZE,
        y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)

# Create a Map for the first viewport.
m1 = Map(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
        lut = Lut.COLOR, cell_to_point = False, outline = True)
m1.setRepresentationToWireframe()

# Create a Camera for the first viewport.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.isometricView()

# Create a second DataCollector reading from the same XML file but specifying
# a different scalar field.
dc2 = DataCollector(source = Source.XML)
dc2.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc2.setActiveScalar(scalar = SCALAR_FIELD_CELL_DATA)

# Create a Map for the north-east viewport.
m2 = Map(scene = s, data_collector = dc2, viewport = Viewport.NORTH_EAST,
        lut = Lut.COLOR, cell_to_point = True, outline = True)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}
\subsubsection{\MapOnPlaneCut class}

\begin{classdesc}{MapOnPlaneCut}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \Map, except that it shows a scalar
field cut using a plane. The plane can be translated and rotated along the
X, Y and Z axes.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD and \Transform.
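Since the cut plane is controlled through the \Transform methods, positioning
it typically looks like the following sketch (assuming an existing \Scene
\texttt{s} and \DataCollector \texttt{dc}; the calls mirror those used in the
examples later in this section):

\begin{python}
# Cut the scalar field with a plane parallel to the XY plane,
# lifted by an offset and tilted about the X axis.
mopc = MapOnPlaneCut(scene = s, data_collector = dc,
        viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
        cell_to_point = False, outline = True)
mopc.setPlaneToXY(offset = 0.5)
mopc.rotateX(angle = 10)
\end{python}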

\subsubsection{\MapOnPlaneClip class}

\begin{classdesc}{MapOnPlaneClip}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \MapOnPlaneCut, except that it shows a
scalar field clipped using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Transform and \Clipper.

\subsubsection{\MapOnScalarClip class}

\begin{classdesc}{MapOnScalarClip}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \Map, except that it shows a scalar
field clipped using a scalar value.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD and \Clipper.

\subsubsection{\Velocity class}

\begin{classdesc}{Velocity}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR,
arrow = Arrow.TWO_D, lut = Lut.COLOR, cell_to_point = False, outline = True}
Class that shows a vector field using arrows. The arrows can either be
colored or grey-scaled, depending on the lookup table used. If the arrows
are colored, there are two possible coloring modes, using either vector data
or scalar data. Similarly, the arrows can be either two-dimensional or
three-dimensional.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \GlyphThreeD and \MaskPoints.
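A \Velocity instance is created like a \Map. The sketch below assumes an
existing \Scene \texttt{s} and a \DataCollector \texttt{dc} with an active
vector field; the \texttt{setScaleFactor}, \texttt{setRatio} and
\texttt{randomOn} calls come from the inherited \GlyphThreeD and \MaskPoints
methods:

\begin{python}
# Show the vector field with scaled-down 3D arrows, masking the
# source points so that only a subset carries an arrow.
v = Velocity(scene = s, data_collector = dc,
        viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR,
        arrow = Arrow.THREE_D, lut = Lut.COLOR, cell_to_point = False,
        outline = True)
v.setScaleFactor(scale_factor = 0.5)
v.setRatio(2)
v.randomOn()
\end{python}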

\subsubsection{\VelocityOnPlaneCut class}

\begin{classdesc}{VelocityOnPlaneCut}{scene, data_collector,
arrow = Arrow.TWO_D, color_mode = ColorMode.VECTOR,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
cell_to_point = False, outline = True}
This class works in a similar way to \MapOnPlaneCut, except that
it shows a vector field using arrows cut using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \GlyphThreeD, \Transform and \MaskPoints.

A typical usage of \VelocityOnPlaneCut is shown below; the script can be
found in \texttt{\PyvisiExampleDirectory}.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, VelocityOnPlaneCut, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

VECTOR_FIELD_CELL_DATA = "velocity"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "velocity.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
        y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveVector(vector = VECTOR_FIELD_CELL_DATA)

# Create a VelocityOnPlaneCut.
vopc1 = VelocityOnPlaneCut(scene = s, data_collector = dc1,
        viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR,
        arrow = Arrow.THREE_D, lut = Lut.COLOR, cell_to_point = False,
        outline = True)
vopc1.setScaleFactor(scale_factor = 0.5)
vopc1.setPlaneToXY(offset = 0.5)
vopc1.setRatio(2)
vopc1.randomOn()

# Create a Camera.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.isometricView()
c1.elevation(angle = -20)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\VelocityOnPlaneClip class}

\begin{classdesc}{VelocityOnPlaneClip}{scene, data_collector,
arrow = Arrow.TWO_D, color_mode = ColorMode.VECTOR,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR,
cell_to_point = False, outline = True}
This class works in a similar way to \MapOnPlaneClip, except that it shows a
vector field using arrows clipped using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \GlyphThreeD, \Transform, \Clipper and
\MaskPoints.

\subsubsection{\Ellipsoid class}

\begin{classdesc}{Ellipsoid}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
Class that shows a tensor field using ellipsoids. The ellipsoids can either be
colored or grey-scaled, depending on the lookup table used.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Sphere, \TensorGlyph and \MaskPoints.

\subsubsection{\EllipsoidOnPlaneCut class}

\begin{classdesc}{EllipsoidOnPlaneCut}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \MapOnPlaneCut, except that it shows
a tensor field using ellipsoids cut using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Sphere, \TensorGlyph, \Transform and
\MaskPoints.

\subsubsection{\EllipsoidOnPlaneClip class}

\begin{classdesc}{EllipsoidOnPlaneClip}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
This class works in a similar way to \MapOnPlaneClip, except that it shows a
tensor field using ellipsoids clipped using a plane.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Sphere, \TensorGlyph, \Transform, \Clipper
and \MaskPoints.

A typical usage of \EllipsoidOnPlaneClip is shown below; the script can be
found in \texttt{\PyvisiExampleDirectory}.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, EllipsoidOnPlaneClip, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

TENSOR_FIELD_CELL_DATA = "stress_cell"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "ellipsoid.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG

# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
        y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveTensor(tensor = TENSOR_FIELD_CELL_DATA)

# Create an EllipsoidOnPlaneClip.
eopc1 = EllipsoidOnPlaneClip(scene = s, data_collector = dc1,
        viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = True,
        outline = True)
eopc1.setPlaneToXY()
eopc1.setScaleFactor(scale_factor = 0.2)
eopc1.rotateX(angle = 10)

# Create a Camera.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.bottomView()
c1.azimuth(angle = -90)
c1.elevation(angle = 10)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\Contour class}

\begin{classdesc}{Contour}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False,
outline = True}
Class that shows a scalar field using contour surfaces. The contour surfaces
can either be colored or grey-scaled, depending on the lookup table used. This
class can also be used to generate isosurfaces.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD and \ContourModule.

A typical usage of \Contour is shown below; the script can be found in
\texttt{\PyvisiExampleDirectory}.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, Contour, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

SCALAR_FIELD_POINT_DATA = "temperature"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "contour.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG


# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
        y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveScalar(scalar = SCALAR_FIELD_POINT_DATA)

# Create a Contour.
ctr1 = Contour(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
        lut = Lut.COLOR, cell_to_point = False, outline = True)
ctr1.generateContours(contours = 3)

# Create a Camera.
cam1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
cam1.elevation(angle = -40)

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}
570 |
|
|
|
571 |
jongui |
1035 |
\subsubsection{\ContourOnPlaneCut class} |
572 |
|
|
|
573 |
|
|
\begin{classdesc}{ContourOnPlaneCut}{scene, data_collector, |
574 |
jongui |
1051 |
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False, |
575 |
|
|
outline = True} |
576 |
jongui |
1035 |
This class works in a similar way to \MapOnPlaneCut, except that it shows a |
577 |
jongui |
1079 |
scalar field using contour surfaces cut using a plane. |
578 |
gross |
999 |
\end{classdesc} |
579 |
gross |
606 |
|
580 |
jongui |
1035 |
The following are some of the methods available:\\ |
581 |
|
|
Methods from \ActorThreeD, \ContourModule and \Transform. |
582 |
|
|
|
583 |
|
|
\subsubsection{\ContourOnPlaneClip class} |
584 |
|
|
|
585 |
|
|
\begin{classdesc}{ContourOnPlaneClip}{scene, data_collector, |
586 |
jongui |
1051 |
viewport = Viewport.SOUTH_WEST, lut = Lut.COLOR, cell_to_point = False, |
587 |
|
|
outline = True} |
588 |
jongui |
1035 |
This class works in a similar way to \MapOnPlaneClip, except that it shows a |
589 |
jongui |
1079 |
scalar field using contour surfaces clipped using a plane. |
590 |
gross |
999 |
\end{classdesc} |
591 |
|
|
|
592 |
jongui |
1035 |
The following are some of the methods available:\\ |
593 |
|
|
Methods from \ActorThreeD, \ContourModule, \Transform and \Clipper. |
594 |
|
|
|
595 |
|
|
\subsubsection{\StreamLine class} |
596 |
|
|
|
597 |
|
|
\begin{classdesc}{StreamLine}{scene, data_collector, |
598 |
|
|
viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.VECTOR, lut = Lut.COLOR, |
599 |
|
|
outline = True} |
600 |
|
|
Class that shows the direction of particles of a vector field using streamlines. |
601 |
|
|
The streamlines can either be colored or grey-scaled, depending on the lookup |
602 |
|
|
table used. If the streamlines are colored, there are two possible coloring |
603 |
|
|
modes, either using vector data or scalar data. |
604 |
gross |
999 |
\end{classdesc} |
605 |
|
|
|
606 |
jongui |
1035 |
The following are some of the methods available:\\ |
607 |
|
|
Methods from \ActorThreeD, \PointSource, \StreamLineModule and \Tube. |
608 |
|
|
|
609 |
jongui |
1081 |
A typical usage of \StreamLine is shown below, which can be found in |
610 |
|
|
\texttt{\PyvisiExampleDirectory}. |
611 |
|
|
|
612 |
|
|
\begin{python} |
613 |
|
|
# Import the necessary modules. |
614 |
|
|
from esys.pyvisi import Scene, DataCollector, StreamLine, Camera |
615 |
|
|
from esys.pyvisi.constant import * |
616 |
|
|
|
617 |
|
|
PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/" |
618 |
|
|
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/" |
619 |
|
|
X_SIZE = 400 |
620 |
|
|
Y_SIZE = 400 |
621 |
|
|
|
622 |
|
|
VECTOR_FIELD_CELL_DATA = "temperature" |
623 |
|
|
FILE_3D = "interior_3D.xml" |
624 |
|
|
IMAGE_NAME = "streamline.jpg" |
625 |
|
|
JPG_RENDERER = Renderer.ONLINE_JPG |
626 |
|
|
|
627 |
|
|
|
628 |
|
|
# Create a Scene. |
629 |
|
|
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE, |
630 |
|
|
y_size = Y_SIZE) |
631 |
|
|
|
632 |
|
|
# Create a DataCollector reading from a XML file. |
633 |
|
|
dc1 = DataCollector(source = Source.XML) |
634 |
|
|
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D) |
635 |
|
|
|
636 |
|
|
# Create a Streamline. |
637 |
|
|
sl1 = StreamLine(scene = s, data_collector = dc1, |
638 |
|
|
viewport = Viewport.SOUTH_WEST, color_mode = ColorMode.SCALAR, |
639 |
|
|
lut = Lut.COLOR, cell_to_point = False, outline = True) |
640 |
|
|
sl1.setTubeRadius(radius = 0.02) |
641 |
|
|
|
642 |
|
|
# Create a Camera. |
643 |
|
|
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST) |
644 |
|
|
c1.isometricView() |
645 |
|
|
|
646 |
|
|
# Render the object. |
647 |
|
|
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME) |
648 |
|
|
\end{python} |

\subsubsection{\Carpet class}

\begin{classdesc}{Carpet}{scene, data_collector,
viewport = Viewport.SOUTH_WEST, warp_mode = WarpMode.SCALAR,
lut = Lut.COLOR, outline = True}
This class works in a similar way to \MapOnPlaneCut, except that it shows a
scalar field cut on a plane and deformed (warped) along the plane's normal. The
plane can either be colored or grey-scaled, depending on the lookup table used.
Similarly, the plane can be deformed using either scalar or vector data.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \Warp and \Transform.

A typical usage of \Carpet is shown below, which can be found in
\texttt{\PyvisiExampleDirectory}.

\begin{python}
# Import the necessary modules.
from esys.pyvisi import Scene, DataCollector, Carpet, Camera
from esys.pyvisi.constant import *

PYVISI_EXAMPLE_MESHES_PATH = "data_meshes/"
PYVISI_EXAMPLE_IMAGES_PATH = "data_sample_images/"
X_SIZE = 400
Y_SIZE = 400

SCALAR_FIELD_CELL_DATA = "temperature_cell"
FILE_3D = "interior_3D.xml"
IMAGE_NAME = "carpet.jpg"
JPG_RENDERER = Renderer.ONLINE_JPG

# Create a Scene.
s = Scene(renderer = JPG_RENDERER, num_viewport = 1, x_size = X_SIZE,
        y_size = Y_SIZE)

# Create a DataCollector reading from an XML file.
dc1 = DataCollector(source = Source.XML)
dc1.setFileName(file_name = PYVISI_EXAMPLE_MESHES_PATH + FILE_3D)
dc1.setActiveScalar(scalar = SCALAR_FIELD_CELL_DATA)

# Create a Carpet.
cpt1 = Carpet(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST,
        warp_mode = WarpMode.SCALAR, lut = Lut.COLOR, cell_to_point = True,
        outline = True)
cpt1.setPlaneToXY(0.2)
cpt1.setScaleFactor(1.9)

# Create a Camera.
c1 = Camera(scene = s, data_collector = dc1, viewport = Viewport.SOUTH_WEST)
c1.isometricView()

# Render the object.
s.render(PYVISI_EXAMPLE_IMAGES_PATH + IMAGE_NAME)
\end{python}

\subsubsection{\Image class}

\begin{classdesc}{Image}{scene, image_reader, viewport = Viewport.SOUTH_WEST}
Class that displays an image, which can be scaled up or down. The
image can also be translated and rotated along the X, Y and Z axes.
\end{classdesc}

The following are some of the methods available:\\
Methods from \ActorThreeD, \PlaneSource and \Transform.

A typical usage of \Image can be found in
\texttt{\PyvisiExampleDirectory}.

%##############################################################################

\subsection{Coordinate Classes}
This subsection details the instances used to position the rendered object.

\begin{classdesc}{LocalPosition}{x_coor, y_coor}
Class that defines the local positioning coordinate system (2D).
\end{classdesc}

\begin{classdesc}{GlobalPosition}{x_coor, y_coor, z_coor}
Class that defines the global positioning coordinate system (3D).
\end{classdesc}
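
As a brief illustration (the coordinate values below are arbitrary), positions
of both kinds are simply constructed and then passed to any method that expects
a position argument:

\begin{python}
# A 2D window position (i.e. for positioning a 2D actor).
p2d = LocalPosition(10, 10)
# A 3D position within the rendered scene (i.e. a point source center).
p3d = GlobalPosition(0.5, 0.5, 0.5)
\end{python}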

%##############################################################################

\subsection{Supporting Classes}
This subsection details the supporting classes inherited by the data
visualization classes and their available methods.

\subsubsection{\ActorThreeD class}

The following are some of the methods available:

\begin{methoddesc}[Actor3D]{setOpacity}{opacity}
Set the opacity (transparency) of the 3D actor.
\end{methoddesc}

\begin{methoddesc}[Actor3D]{setColor}{color}
Set the color of the 3D actor.
\end{methoddesc}

\begin{methoddesc}[Actor3D]{setRepresentationToWireframe}{}
Set the representation of the 3D actor to wireframe.
\end{methoddesc}
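
For example, assuming \texttt{m1} is a \Map instance (the name is
hypothetical; any 3D visualization instance inherits these methods), the
rendered surface can be made semi-transparent and drawn as a wireframe:

\begin{python}
m1.setOpacity(opacity = 0.5)       # 50% transparent
m1.setRepresentationToWireframe()  # draw edges only
\end{python}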

\subsubsection{\ActorTwoD class}

The following are some of the methods available:

\begin{methoddesc}[Actor2D]{setPosition}{position}
Set the position (XY) of the 2D actor. The default position is the lower left
corner of the window / viewport.
\end{methoddesc}
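
For example, assuming \texttt{t1} is a 2D actor instance (a hypothetical name
for a text or legend object), it can be repositioned with a
\texttt{LocalPosition}:

\begin{python}
# Place the 2D actor 50 pixels in from the lower left corner.
t1.setPosition(LocalPosition(50, 50))
\end{python}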

\subsubsection{\Clipper class}

The following are some of the methods available:

\begin{methoddesc}[Clipper]{setInsideOutOn}{}
Clip one side of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Clipper]{setInsideOutOff}{}
Clip the other side of the rendered object.
\end{methoddesc}

\begin{methoddesc}[Clipper]{setClipValue}{value}
Set the scalar clip value (used instead of a plane) for the clipper.
\end{methoddesc}
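
For instance, assuming \texttt{clip1} is a scalar-clip visualization instance
(the name is hypothetical), the object can be clipped at a scalar threshold and
the retained side flipped:

\begin{python}
clip1.setClipValue(value = 0.3)  # clip by scalar value instead of a plane
clip1.setInsideOutOn()           # keep the opposite side of the clip
\end{python}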

\subsubsection{\ContourModule class}

The following are some of the methods available:

\begin{methoddesc}[ContourModule]{generateContours}{contours,
lower_range = None, upper_range = None}
Generate the specified number of contours within the specified range.
In order to generate an iso surface, the 'lower_range' and 'upper_range'
must be equal.
\end{methoddesc}
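
For example, assuming \texttt{ctr1} is a \Contour instance (hypothetical
name), both regular contours and a single iso surface can be generated:

\begin{python}
# Five contours evenly spaced between 0.0 and 1.2.
ctr1.generateContours(contours = 5, lower_range = 0.0, upper_range = 1.2)
# A single iso surface at 0.4 (equal lower and upper ranges).
ctr1.generateContours(contours = 1, lower_range = 0.4, upper_range = 0.4)
\end{python}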

\subsubsection{\GlyphThreeD class}

The following are some of the methods available:

\begin{methoddesc}[Glyph3D]{setScaleModeByVector}{}
Set the 3D glyph to scale according to the vector data.
\end{methoddesc}

\begin{methoddesc}[Glyph3D]{setScaleModeByScalar}{}
Set the 3D glyph to scale according to the scalar data.
\end{methoddesc}

\begin{methoddesc}[Glyph3D]{setScaleFactor}{scale_factor}
Set the 3D glyph scale factor.
\end{methoddesc}
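
For example, assuming \texttt{v1} is a \Velocity instance (hypothetical name),
the arrows can be scaled by the magnitude of the vector data and shrunk
overall:

\begin{python}
v1.setScaleModeByVector()              # arrow size follows vector magnitude
v1.setScaleFactor(scale_factor = 0.2)  # shrink all arrows uniformly
\end{python}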

\subsubsection{\TensorGlyph class}

The following are some of the methods available:

\begin{methoddesc}[TensorGlyph]{setScaleFactor}{scale_factor}
Set the scale factor for the tensor glyph.
\end{methoddesc}

\begin{methoddesc}[TensorGlyph]{setMaxScaleFactor}{max_scale_factor}
Set the maximum allowable scale factor for the tensor glyph.
\end{methoddesc}

\subsubsection{\PlaneSource class}

The following are some of the methods available:

\begin{methoddesc}[PlaneSource]{setPoint1}{position}
Set the first point from the origin of the plane source.
\end{methoddesc}

\begin{methoddesc}[PlaneSource]{setPoint2}{position}
Set the second point from the origin of the plane source.
\end{methoddesc}

\subsubsection{\PointSource class}

The following are some of the methods available:

\begin{methoddesc}[PointSource]{setPointSourceRadius}{radius}
Set the radius of the sphere.
\end{methoddesc}

\begin{methoddesc}[PointSource]{setPointSourceCenter}{position}
Set the center of the sphere.
\end{methoddesc}

\begin{methoddesc}[PointSource]{setPointSourceNumberOfPoints}{points}
Set the number of points to generate within the sphere (the larger the
number of points, the more streamlines are generated).
\end{methoddesc}
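
For example, assuming \texttt{sl1} is a \StreamLine instance that inherits
these methods (streamlines are seeded from the point source), the seeding
region can be controlled as follows:

\begin{python}
# Seed the streamlines from 10 points within a small sphere.
sl1.setPointSourceRadius(radius = 0.1)
sl1.setPointSourceCenter(GlobalPosition(0.5, 0.5, 0.5))
sl1.setPointSourceNumberOfPoints(points = 10)
\end{python}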

\subsubsection{\Sphere class}

The following are some of the methods available:

\begin{methoddesc}[Sphere]{setThetaResolution}{resolution}
Set the theta resolution of the sphere.
\end{methoddesc}

\begin{methoddesc}[Sphere]{setPhiResolution}{resolution}
Set the phi resolution of the sphere.
\end{methoddesc}

\subsubsection{\StreamLineModule class}

The following are some of the methods available:

\begin{methoddesc}[StreamLineModule]{setMaximumPropagationTime}{time}
Set the maximum length of the streamline, expressed in elapsed time.
\end{methoddesc}

\begin{methoddesc}[StreamLineModule]{setIntegrationToBothDirections}{}
Set the integration to occur in both directions: forward (where the streamline
goes) and backward (where the streamline came from).
\end{methoddesc}

\subsubsection{\Transform class}

\begin{methoddesc}[Transform]{translate}{x_offset, y_offset, z_offset}
Translate the rendered object along the x, y and z-axes.
\end{methoddesc}

\begin{methoddesc}[Transform]{rotateX}{angle}
Rotate the plane about the x-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{rotateY}{angle}
Rotate the plane about the y-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{rotateZ}{angle}
Rotate the plane about the z-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{setPlaneToXY}{offset = 0}
Set the plane orthogonal to the z-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{setPlaneToYZ}{offset = 0}
Set the plane orthogonal to the x-axis.
\end{methoddesc}

\begin{methoddesc}[Transform]{setPlaneToXZ}{offset = 0}
Set the plane orthogonal to the y-axis.
\end{methoddesc}
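
For example, assuming \texttt{m1} is a visualization instance cut on a plane
(a hypothetical name; any instance inheriting \Transform works the same way),
the cut plane can be placed, tilted and shifted:

\begin{python}
m1.setPlaneToXY(offset = 0.3)  # plane orthogonal to the z-axis
m1.rotateX(angle = 30)         # tilt the plane about the x-axis
m1.translate(0, 0, 0.1)        # shift it along the z-axis
\end{python}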

\subsubsection{\Tube class}

\begin{methoddesc}[Tube]{setTubeRadius}{radius}
Set the radius of the tube.
\end{methoddesc}

\begin{methoddesc}[Tube]{setTubeRadiusToVaryByVector}{}
Set the radius of the tube to vary by vector data.
\end{methoddesc}

\begin{methoddesc}[Tube]{setTubeRadiusToVaryByScalar}{}
Set the radius of the tube to vary by scalar data.
\end{methoddesc}

\subsubsection{\Warp class}

\begin{methoddesc}[Warp]{setScaleFactor}{scale_factor}
Set the displacement scale factor.
\end{methoddesc}
964 |
jongui |
1079 |
\subsubsection{\MaskPoints class} |
965 |
jongui |
1035 |
|
966 |
jongui |
1079 |
\begin{methoddesc}[MaskPoints]{setRatio}{ratio} |
967 |
|
|
Mask every nth point. |
968 |
|
|
\end{methoddesc} |
969 |
|
|
|
970 |
|
|
\begin{methoddesc}[MaskPoints]{randomOn}{} |
971 |
|
|
Enables the randomization of the points selected for masking. |
972 |
|
|
\end{methoddesc} |
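
For example, assuming \texttt{v1} is a \Velocity instance that inherits these
methods (point masking thins out the number of arrows drawn):

\begin{python}
v1.setRatio(ratio = 2)  # keep only every 2nd point
v1.randomOn()           # choose the retained points at random
\end{python}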

\section{Online Rendering Mechanism}

%==============================================
\section{How to Make a Movie}