Revision Log
doxygen doco now correctly reports the global project revision number. The link to epydoc is now via a relative URL (avoids the problem of nightly doxygen pointing at out-of-date release epydoc).
Merging version 2269 to trunk
Fixed a little doxygen typo in the hyperlink.
Added showEscriptParams to output a list of available params.
Data::copySelf() now returns an object instead of a pointer. Fixed a bug in copyFromArray relating to expanded data.
This commit cleans up the incompressible solver and adds a DarcyFlux solver in the model module. Some documentation for both classes has been added. The convection code is only linear at the moment.
Fixed a warning in cpp unit tests under dodebug. Pointed the URL for python doco at shake200 rather than iservo. Added support for trace and transpose to LazyData. Fixed a bug in trace to initialise running totals.
Two changes. 1. Moved blocktimer from escript to esysUtils. 2. Made it possible to link to paso as a DLL or .so. Should have no effect on 'nixes. In respect of 1, blocktimer had begun to spring up everywhere, so for the moment I thought it best to move it to the only other library that pops up all over the place. In respect of 2, paso needed to be a DLL in order to use the windows intelc /fast option, which does aggressive multi-file optimisations. Even in its current form, it either vectorises or parallelises hundreds more loops in the esys system than appear in the pragmas. In achieving 2 I have not been too delicate in adding PASO_DLL_API declarations to the .h files in paso/src. Only toward the end of the conversion, when the number of linker errors dropped below 20, say, did I get choosy about which functions in a header I declared PASO_DLL_API. As a result, there are likely to be many routines declared as external function symbols that are in fact internal to the paso DLL. Why is this an issue? It prevents the intelc compiler from getting aggressive on the paso module. With pain there is sometimes gain. At least all the DLL rules in windows give good (non-microsoft) compiler writers a chance to really shine. So, if you should see a PASO_DLL_API on a function in a paso header file and think to yourself, "that function is only called in paso, why export it?", then feel free to delete the PASO_DLL_API export declaration. Here's hoping for no breakage.
I may get into trouble for this. boost-python 1.34 does have a docstring_options class, but does not have a 3-argument constructor for it. So the test has been modified to #if ((BOOST_VERSION/100)%1000 > 34) || (BOOST_VERSION/100000 >1). If you wish to make things more delicate, one can define a 2-argument construction of docopt just for 1.34 (with an #elif). Probably not worth the effort, frankly. Hope that this has not broken anything for anyone else; the SVN logs suggest this area is a little fragile. Also, please be aware that much of our chemistry interface code, which we wish to use with escript, makes extensive use of boost python. Having two different boost versions mucking with the python interpreter sounds like a really bad idea, I'm sure you'll agree. The problem is that it is not a simple task for us to build new versions of boost-python on all our platforms. Consequently, it would be nice to be informed when you intend to upgrade a support library of this nature so that we can plan and allocate resources to keep up. Cheers.
docstring_options was added in boost 1.34; it is now macro'd out if you are compiling with anything earlier than that.
Fixing some warnings from epydoc. Disabled c++ signatures in python docstrings. Removed references to Bruce in epydoc and the users guide.
Bringing all changes across from schroedinger. (Note this does not mean development is done, just that it will happen on the trunk for now). If anyone notices any problems please contact me.
Modified Data::toString() so it doesn't throw on DataEmpty. Added setEscriptParamInt and getEscriptParamInt as free functions. At the moment all they do is allow you to set the param TOO_MANY_LINES, which is used to determine when printing a Data object will show you the points and when it will print a summary. I've set the default value back to 80 lines. If you need to see more lines use (in python): setEscriptParamInt("TOO_MANY_LINES",80000)
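For illustration, a minimal sketch of the intended usage, assuming both free functions are exposed by the escript module (the exact import path is an assumption here):

    from esys.escript import setEscriptParamInt, getEscriptParamInt

    # Raise the threshold so printing a large Data object shows all points
    # instead of falling back to a summary.
    setEscriptParamInt("TOO_MANY_LINES", 80000)
    print getEscriptParamInt("TOO_MANY_LINES")   # expect 80000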
convection.py checkpointing uses mkdir/rmdir, and under MPI there was a race condition: mkdir needs to be run on only one CPU, followed by a barrier to prevent worker processors from using the directory before it exists. Added methods domain.MPIBarrier and domain.onMasterProcessor() to implement this technique. A more general solution might be possible in the future.
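The pattern this enables is roughly the following (a sketch only; the directory name and the surrounding script are assumptions):

    import os

    # 'domain' is an existing escript domain object.
    # Only the master process creates the checkpoint directory...
    if domain.onMasterProcessor():
        os.mkdir("checkpoint.dir")
    # ...and every process waits here until it exists.
    domain.MPIBarrier()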
Closing the moreshared branch
first version of testing for transport solver.
modifications to the LinearPDE class and a first version of the Transport class
Copyright updated in all files
Added python-level methods getMPISizeWorld() and getMPIRankWorld() for MPI process info. Test suite run_inputOutput.py runs on any number of cores now, hybrid may still be a problem.
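A small sketch of how these might be used, assuming they are exposed by the escript module:

    from esys.escript import getMPISizeWorld, getMPIRankWorld

    # Restrict a message to one rank so it prints once, not once per process.
    if getMPIRankWorld() == 0:
        print "running on %d MPI processes" % getMPISizeWorld()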
All about making DataEmpty instances throw.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Exposed getDim from AbstractDomain to python to fix a bug. Added an isEmpty member to DataAbstract to allow it to throw if queries are made about a DataEmpty instance. Added exceptions to DataAbstract, DataEmpty and Data to prevent calls being made against DataEmpty objects. The following still work as expected on DataEmpty instances: copy, getDomain, getFunctionSpace, isEmpty, isExpanded, isProtected, isTagged, setProtection. You can also call interpolate, however it should throw if you try to change FunctionSpaces.
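A rough illustration of the new behaviour, assuming the default Data() constructor produces a DataEmpty instance:

    from esys.escript import Data

    d = Data()            # an empty Data object
    print d.isEmpty()     # still permitted, prints True
    d.getShape()          # queries like this are now expected to throw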
Fixed serialization of I/O for MPI; the code didn't compile without MPI.
Serialized parallel I/O when writing mesh or data to NetCDF file on multiple MPI processors. Added domain method getMPIComm() to complement getMPISize() and getMPIRank().
Added Data::copySelf() [Note: this is exposed as copy() in python]. This method returns a pointer to a deep copy of the target. There are c++ tests but no python tests for this yet. All DataAbstracts now have a deepCopy() which simplifies the implementation of the copy methods.
Merged noarrayview branch onto trunk.
method returning reference id added to FunctionSpace class
getListOfTags method added to FunctionSpace class
Merge in /branches/windows_from_1456_trunk_1620_merged_in branch. You will find a preserved pre-merge trunk in tags under tags/trunk_at_1625. That will be useful for diffing & checking on my stupidity. Here is a list of the conflicts and their resolution at this point in time (LLWS == looks like white space):
finley/src/Assemble_addToSystemMatrix.c - resolved to branch - unused var, may be wrong
finley/src/CPPAdapter/SystemMatrixAdapter.cpp - resolved to branch - LLWS
finley/src/CPPAdapter/MeshAdapter.cpp - resolved to branch - LLWS
paso/src/PCG.c - resolved to branch - unused var fixes
paso/src/SolverFCT.c - resolved to branch - LLWS
paso/src/FGMRES.c - resolved to branch - LLWS
paso/src/Common.h - resolved to trunk version; it's omp.h's include... not sure it's needed, but for the sake of safety
paso/src/Functions.c - resolved to branch version; indentation/tab removal, and return an error on a bad unimplemented Paso_FunctionCall
paso/src/SolverFCT_solve.c - resolved to branch version - unused vars
paso/src/SparseMatrix_MatrixVector.c - resolved to branch version - unused vars
escript/src/Utils.cpp - resolved to branch; needs WinSock2.h
escript/src/DataExpanded.cpp - resolved to branch version - LLWS
escript/src/DataFactory.cpp - resolved to branch version
This currently passes tests on linux (debian), but is not checked on windows or Altix yet. This checkin is to make a trunk I can check out for windows to do tests on it. A known outstanding problem is in the operator=() method of exceptions causing warning messages on the intel compilers. May the God of doughnuts have mercy on my soul.
Added python method printParallelThreadCounts() to tell how many MPI CPUs and OpenMP threads we are using (for testing hybrid runs)
Merge the changes to these few files with the windows port branch to test just these changes under linux and altix.
some more work on the transport solver.
new upwinding algorithm (still fails)
And get the *(&(*&(* name right
Restore the trunk that existed before the windows changes were committed to the (now moved to branches) old trunk.
Make a temp copy of the trunk before checking in the windows changes
explicit upwinding scheme added.
finley interface to paso's transport solver added.
Copied a handful of files from trunk-mpi-branch into trunk
The MPI branch is hereby closed. All future work should be in trunk. Previously, in revision 1295, I merged the latest changes to trunk into trunk-mpi-branch. In this revision I copied all files from trunk-mpi-branch over the corresponding trunk files. I did not use 'svn merge'; it was a copy.
New python method getVersion() which returns the Subversion revision from which escript was compiled
Some changes to make things run on windows. There is still a problem with netcdf and long file names on windows, but there is the suspicion that this is a bigger problem related to boost (compiler options). In fact, runs with large numbers of iterations/time steps tend to create seg faults.
This version passes the tests on windows except for:
* vtk
* netCDF
The version needs to be tested on altix and linux.
problem with resetting a faulty PDE rhs fixed.
reduced integration schemes are implemented now for grad, integrate, etc. Tests still to be added.
first steps toward reduced element integration order. The escript bit is done but the finley part still needs work.
clear name tagging is supported now.
In VC++ boost has problems with numarray arguments from python. This fixes that problem by taking python::object arguments at the python level and converting them into python::numeric::array at the C++ level. This hasn't been tested with VC++ yet. Moreover, the two Data methods dealing with big numarrays as argument and return value have been removed.
netCDF can now be switched off at compilation, in which case load and dump of data objects are not available.
The set/getRefVal functions of Data objects have been removed (mainly to avoid later problems with MPI). Moreover, a faster access to the reference id of samples has been introduced. I don't think that anybody will profit from this at this stage, but it will allow a faster dump of data objects.
escript data objects can now be saved to netCDF files, see http://www.unidata.ucar.edu/software/netcdf/. Currently only constant data are implemented, with expanded and tagged data to follow. There are two new functions: to dump a data object
    s = Data(...)
    s.dump(<filename>)
and to recover it
    s = load(<filename>, domain)
Notice that the function space of s is recovered, but the domain is still needed. dump and load will replace archive and extract. The installation now requires netCDF.
setValueOfDataPoint now accepts a double value as argument
I have done some clarification on functions that allow access to individual data point values in a Data object. The term "data point number" is always local on an MPI process and refers to the pair (data_point_in_sample, sample) as a single identifier (data_point_in_sample + sample * number_data_points_per_sample). A "global data point number" refers to a tuple of a processor id and a local data point number. The function convertToNumArrayFromSampleNo has been removed now and convertToNumArrayFromDPNo renamed to getValueOfDataPoint. There are two new functions: getNumberOfDataPoints and setValueOfDataPoint. This allows you to do things like:
    in = Data(...)
    out = Data(...)
    for i in xrange(in.getNumberOfDataPoints()):
        in_loc = in.getValueOfDataPoint(i)
        out_loc = <some operations on in_loc>
        out.setValueOfDataPoint(i, out_loc)
Also, mindp is renamed to minGlobalDataPoint and there is a new function getValueOfGlobalDataPoint. Under MPI the functions getNumberOfDataPoints and getValueOfDataPoint work locally on each process (so the code above is executed in parallel), while the latter allows getting a single value across all processors.
access to the number of samples added
Added erf (error function) implementation
Tensor products for Data objects are now computed by a C++ method C_GeneralTensorProduct, which calls C function matrix_matrix_product to do the actual calculation. Can perform product with either input transposed in place, meaning without first computing the transpose in a separate step.
the new function swap_axes + tests added. (It replaces swap).
new function _swap. Python wrapper + testing is still missing.
function added to manually free unused memory in the memory manager
coordinates, element size and normals returned by the corresponding FunctionSpace methods are now protected against updates, so +=, -=, *=, /=, setTaggedValue, fillFromNumArray will throw an exception. The FunctionSpace class does not buffer the coordinates, element size and normals yet.
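In practice the protection means something like the following (a sketch; the domain construction via finley's Rectangle is an assumption):

    from esys.escript import ContinuousFunction
    from esys.finley import Rectangle

    dom = Rectangle(10, 10)
    x = ContinuousFunction(dom).getX()   # protected coordinate Data
    x += 1.0                             # now throws instead of silently updating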
Modified the following python methods in escript/py_src/util.py to call faster C++ methods: escript_trace, escript_transpose, escript_symmetric, escript_nonsymmetric.
new FunctionSpace method setTags added
float**Data is now running
tests with tagged data pass now
DataVariable removed; it is not used.
NEW BUILD SYSTEM. This commit contains the new build system with cross-platform support. Most things work as before, though you can have more control.
ENVIRONMENT settings have changed:
+ You no longer require LD_LIBRARY_PATH or PYTHONPATH to point to the esysroot for building and testing performed via scons.
+ ACcESS altix users: It is recommended you change your modules to load the latest intel compiler and the other libraries required by boost to match the setup in svn (you can override). The correct modules are as follows:
    module load intel_cc.9.0.026
    export MODULEPATH=${MODULEPATH}:/data/raid2/toolspp4/modulefiles/gcc-3.3.6
    module load boost/1.33.0/python-2.4.1
    module load python/2.4.1
    module load numarray/1.3.3
More copyright information.
test_utilOnFinley fixed (did run the tests that still fail)
some steps towards eigenvalue and eigenvector calculation
modify whereZero etc methods to also accept a tolerance parameter
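Presumably usable along these lines (a sketch; the positional tolerance argument is the assumption here):

    from esys.escript import whereZero

    # given a Data object d, treat values within 1.e-8 of zero as zero
    mask = whereZero(d, 1.e-8)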
rationalise #includes and forward declarations
restructure escript source tree:
move src/Data/* -> src
remove inc
modify #includes and cpppath settings accordingly
reorganised esysUtils to remove inc directory
interface for setting the number of threads from python
Updated link to epydoc generated documentation on the web site.
The length method is removed as it is too slow; use length in util.py instead.
move all directories from trunk/esys2 into trunk and remove esys2
Merge of development branch dev-02 back to main trunk on 2005-10-25
Merge of development branch dev-02 back to main trunk on 2005-09-15
Merge of development branch dev-02 back to main trunk on 2005-09-01
Merge of development branch dev-02 back to main trunk on 2005-08-23
Merge of development branch dev-02 back to main trunk on 2005-08-12
Merge of development branch back to main trunk on 2005-07-22
Merge of development branch back to main trunk on 2005-07-08
Merge of development branch back to main trunk on 2005-06-09
Merge of development branch back to main trunk on 2005-05-06
*** empty log message ***
*** empty log message ***
*** empty log message ***
*** empty log message ***
*** empty log message ***
*** empty log message ***